| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openshift-authentication | | oauth-openshift-74c78cc4c7-nk55v | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-74c78cc4c7-nk55v to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-cloud-credential-operator | | cloud-credential-operator-585cd96855-j89wm | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-585cd96855-j89wm to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-storage-operator | | cluster-storage-operator-86f6b4f867-vvnvr | Scheduled | Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-86f6b4f867-vvnvr to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-storage-operator | | cluster-storage-operator-86f6b4f867-vvnvr | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-samples-operator | | cluster-samples-operator-68ff7cdcb6-z7zcl | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-68ff7cdcb6-z7zcl to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-multus | | cni-sysctl-allowlist-ds-cqvsc | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-cqvsc to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-multus | | cni-sysctl-allowlist-ds-gbw9t | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-gbw9t to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-multus | | cni-sysctl-allowlist-ds-jdfxk | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-jdfxk to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-multus | | multus-2r78s | Scheduled | Successfully assigned openshift-multus/multus-2r78s to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-cluster-node-tuning-operator | | tuned-p8rhd | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-p8rhd to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-kube-controller-manager-operator | | kube-controller-manager-operator-7c885b8899-z89zf | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-kube-controller-manager-operator | | kube-controller-manager-operator-7c885b8899-z89zf | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-7c885b8899-z89zf to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-55594bbb64-rfpvx | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-55594bbb64-rfpvx to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-55594bbb64-w77tp | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-55594bbb64-w77tp to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-kube-apiserver-operator | | kube-apiserver-operator-749f4b99b7-fqnd2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-node-tuning-operator | | tuned-nbxw4 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-nbxw4 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-cluster-node-tuning-operator | | tuned-lc926 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-lc926 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-kube-apiserver-operator | | kube-apiserver-operator-749f4b99b7-fqnd2 | Scheduled | Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-749f4b99b7-fqnd2 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-node-tuning-operator | | tuned-kzgnx | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-kzgnx to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-cluster-node-tuning-operator | | tuned-9nnhr | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-9nnhr to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-operator-9bd7f8667-lfs5z | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-storage-operator | | csi-snapshot-controller-operator-9bd7f8667-lfs5z | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-9bd7f8667-lfs5z to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-node-tuning-operator | | tuned-8bh59 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-8bh59 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-5b66777f7c-9pqmc | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-5b66777f7c-9pqmc to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-node-tuning-operator | | cluster-node-tuning-operator-5b66777f7c-9pqmc | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-machine-approver | | machine-approver-5697c6f6dd-kpg6d | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-5697c6f6dd-kpg6d to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-machine-approver | | machine-approver-5697c6f6dd-kpg6d | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-storage-operator | | csi-snapshot-webhook-64d5477c9-hl7k6 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-64d5477c9-hl7k6 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-storage-operator | | csi-snapshot-webhook-64d5477c9-wpvpv | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-64d5477c9-wpvpv to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-cluster-storage-operator | | csi-snapshot-webhook-74d568664-nsm6t | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-74d568664-nsm6t to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-cluster-storage-operator | | csi-snapshot-webhook-74d568664-qv7c9 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-74d568664-qv7c9 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-operator-7ddb788594-zjfz2 | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-operator-7ddb788594-zjfz2 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-cluster-version | | cluster-version-operator-59fc58bb8-h6cf2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-79648c8fd6-swcgw | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-cluster-version | | cluster-version-operator-59fc58bb8-h6cf2 | Scheduled | Successfully assigned openshift-cluster-version/cluster-version-operator-59fc58bb8-h6cf2 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-config-operator | | openshift-config-operator-85b957bbfc-dwcrh | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-config-operator | | openshift-config-operator-85b957bbfc-dwcrh | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-85b957bbfc-dwcrh to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-zqcnw | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-zqcnw to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-console | | console-54d86f69c8-k5dnq | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-rlw6r | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-rlw6r to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-j94ng | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-j94ng to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-console | | console-54d86f69c8-k5dnq | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-54d86f69c8-k5dnq | Scheduled | Successfully assigned openshift-console/console-54d86f69c8-k5dnq to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-console | | console-54d86f69c8-zdmgs | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-54d86f69c8-zdmgs | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-54d86f69c8-zdmgs | Scheduled | Successfully assigned openshift-console/console-54d86f69c8-zdmgs to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-kube-storage-version-migrator | | migrator-56fbddbb97-d4szr | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-56fbddbb97-d4szr to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | | apiserver-8bdbc6bbb-txb89 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-8bdbc6bbb-hgf9w | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-monitoring | | prometheus-operator-admission-webhook-79648c8fd6-swcgw | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-oauth-apiserver | apiserver | apiserver-8bdbc6bbb-hgf9w | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-d6rjz | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-d6rjz to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-oauth-apiserver | apiserver | apiserver-8bdbc6bbb-hgf9w | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-8bdbc6bbb-hgf9w | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-8bdbc6bbb-hgf9w | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-console | | console-5b488fc55-q5rrh | Scheduled | Successfully assigned openshift-console/console-5b488fc55-q5rrh to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-9p6n5 | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-9p6n5 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-kube-storage-version-migrator-operator | | kube-storage-version-migrator-operator-86c7d8d555-x49bl | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-oauth-apiserver | apiserver | apiserver-8bdbc6bbb-8ndgb | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-storage-version-migrator-operator | | kube-storage-version-migrator-operator-86c7d8d555-x49bl | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-86c7d8d555-x49bl to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-console | | console-5b488fc55-tkp84 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-5b488fc55-tkp84 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-5b488fc55-tkp84 | Scheduled | Successfully assigned openshift-console/console-5b488fc55-tkp84 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-service-ca-operator | | service-ca-operator-7bf6f695bf-4rjcs | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-7bf6f695bf-4rjcs to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-service-ca-operator | | service-ca-operator-7bf6f695bf-4rjcs | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-92bwt | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-92bwt to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-console | | console-5ff7f7597d-qc5rb | Scheduled | Successfully assigned openshift-console/console-5ff7f7597d-qc5rb to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-oauth-apiserver | | apiserver-8bdbc6bbb-8ndgb | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-8bdbc6bbb-8ndgb to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-oauth-apiserver | | apiserver-8bdbc6bbb-8ndgb | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-8bdbc6bbb-8ndgb | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-7b485d54c8-xf6hd | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-5zqr7 | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-5zqr7 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-5sbbh | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-5sbbh to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-node-2dhfp | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-2dhfp to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-oauth-apiserver | | apiserver-7b485d54c8-xf6hd | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-5ff7f7597d-w7z9h | Scheduled | Successfully assigned openshift-console/console-5ff7f7597d-w7z9h to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | | apiserver-799c4c4c77-pmfl8 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-799c4c4c77-pmfl8 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-oauth-apiserver | | apiserver-799c4c4c77-pmfl8 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-6689f89885-9kjdn | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-6689f89885-9kjdn | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-79648c8fd6-swcgw | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-console | | console-6689f89885-m729m | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-6689f89885-m729m | Scheduled | Successfully assigned openshift-console/console-6689f89885-m729m to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-console | | console-6db848448f-kcnxq | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-6db848448f-lt54s | Scheduled | Successfully assigned openshift-console/console-6db848448f-lt54s to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-oauth-apiserver | | apiserver-6f5fbdd644-x99cg | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6f5fbdd644-x99cg to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-l2g2b | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-l2g2b | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-console | | console-7ccb568577-lmz2w | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-7ccb568577-lmz2w | FailedScheduling | skip schedule deleting pod: openshift-console/console-7ccb568577-lmz2w |
| | openshift-monitoring | | prometheus-operator-admission-webhook-79648c8fd6-swcgw | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-79648c8fd6-9gxqf | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | prometheus-operator-admission-webhook-79648c8fd6-9gxqf | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-console | | console-7ccb568577-xpft2 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-multus | | multus-8cm87 | Scheduled | Successfully assigned openshift-multus/multus-8cm87 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-console | | console-7ccb568577-xpft2 | Scheduled | Successfully assigned openshift-console/console-7ccb568577-xpft2 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-l2g2b | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-l2g2b | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-l2g2b | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-console | | console-849dfdb48-8n92v | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-849dfdb48-8n92v | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-849dfdb48-8n92v | Scheduled | Successfully assigned openshift-console/console-849dfdb48-8n92v to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-oauth-apiserver | | apiserver-6f5fbdd644-l2g2b | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6f5fbdd644-l2g2b to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-gqhhc | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-gqhhc | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-console | | console-849dfdb48-nwk7r | Scheduled | Successfully assigned openshift-console/console-849dfdb48-nwk7r to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-gqhhc | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-gqhhc | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6f5fbdd644-gqhhc | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-controller-78fcc99686-zgfxx | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-controller-78fcc99686-zgfxx to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-controller-78fcc99686-zgfxx | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't have free ports for the requested pod ports. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't have free ports for the requested pod ports. |
| | openshift-console | | console-bf6f6f7f6-bl5sv | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-bf6f6f7f6-bl5sv | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-bf6f6f7f6-bl5sv | Scheduled | Successfully assigned openshift-console/console-bf6f6f7f6-bl5sv to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-console | | console-bf6f6f7f6-lk9bc | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-bf6f6f7f6-lk9bc | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-console | | console-bf6f6f7f6-lk9bc | Scheduled | Successfully assigned openshift-console/console-bf6f6f7f6-lk9bc to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-controller-78697f4db4-b529n | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-controller-78697f4db4-b529n to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | | apiserver-6f5fbdd644-gqhhc | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6f5fbdd644-gqhhc to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-controller-78697f4db4-57qdf | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-controller-78697f4db4-57qdf to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-p5j6s | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-p5j6s | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-p5j6s | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-p5j6s | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-p5j6s | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-p5j6s | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6dcfd955f4-p5j6s to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-p5j6s | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-p5j6s | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-fpnbz | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-fpnbz | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-fpnbz | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-fpnbz | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-fpnbz | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-fpnbz | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6dcfd955f4-fpnbz to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-fpnbz | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-fpnbz | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-2jcfl | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-2jcfl | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-2jcfl | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-2jcfl | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-oauth-apiserver | apiserver | apiserver-6dcfd955f4-2jcfl | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-2jcfl | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6dcfd955f4-2jcfl to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-2jcfl | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6dcfd955f4-2jcfl | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6d668d4fc7-x2q4p | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6d668d4fc7-x2q4p to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-oauth-apiserver | | apiserver-6d668d4fc7-x2q4p | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-controller-745666687f-b5rxc | FailedScheduling | running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "gcp-pd-csi-driver-controller-745666687f-b5rxc": pod gcp-pd-csi-driver-controller-745666687f-b5rxc is already assigned to node "ci-op-2fcpj5j6-f6035-2lklf-master-2" |
| | openshift-cluster-csi-drivers | | gcp-pd-csi-driver-controller-745666687f-b5rxc | Scheduled | Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-controller-745666687f-b5rxc to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | | apiserver-6d668d4fc7-x2q4p | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-6d668d4fc7-q9mjb | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-6d668d4fc7-q9mjb to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-oauth-apiserver | | apiserver-6d668d4fc7-q9mjb | | |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-6d668d4fc7-q9mjb |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-pn5rk |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-pn5rk |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-pn5rk |
TerminationStoppedServing |
Server has stopped listening | |
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-pn5rk |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-pn5rk |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-oauth-apiserver |
apiserver-6d668d4fc7-pn5rk |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-6d668d4fc7-pn5rk to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-oauth-apiserver |
apiserver-6d668d4fc7-pn5rk |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-6d668d4fc7-pn5rk |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-6d668d4fc7-hfkd9 |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-6d668d4fc7-hfkd9 to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-oauth-apiserver |
apiserver-6d668d4fc7-hfkd9 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-6d668d4fc7-hfkd9 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-d9msv |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-d9msv |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-d9msv |
TerminationStoppedServing |
Server has stopped listening | |
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-d9msv |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-multus |
multus-additional-cni-plugins-62wrg |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-62wrg to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-6d668d4fc7-d9msv |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-oauth-apiserver |
apiserver-6d668d4fc7-d9msv |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-6d668d4fc7-d9msv to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-oauth-apiserver |
apiserver-6d668d4fc7-d9msv |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-6d668d4fc7-d9msv |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-controller-745666687f-8zwgp |
Scheduled |
Successfully assigned openshift-cluster-csi-drivers/gcp-pd-csi-driver-controller-745666687f-8zwgp to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-oauth-apiserver |
apiserver-669dcc6dbc-rbnvm |
FailedScheduling |
skip schedule deleting pod: openshift-oauth-apiserver/apiserver-669dcc6dbc-rbnvm | ||
openshift-oauth-apiserver |
apiserver-669dcc6dbc-rbnvm |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-tglr4 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-cloud-network-config-controller |
cloud-network-config-controller-7699df78d5-mx8n9 |
Scheduled |
Successfully assigned openshift-cloud-network-config-controller/cloud-network-config-controller-7699df78d5-mx8n9 to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-cloud-network-config-controller |
cloud-network-config-controller-7699df78d5-mx8n9 |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-cloud-credential-operator |
pod-identity-webhook-679666b9-xh4qj |
Scheduled |
Successfully assigned openshift-cloud-credential-operator/pod-identity-webhook-679666b9-xh4qj to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-tglr4 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-tglr4 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-tglr4 |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-oauth-apiserver |
apiserver-65b45c6554-tglr4 |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-65b45c6554-tglr4 to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-service-ca |
service-ca-7949b5fbb4-gsbvx |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-7949b5fbb4-gsbvx to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-oauth-apiserver |
apiserver-65b45c6554-tglr4 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-65b45c6554-tglr4 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-p4w47 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-cloud-credential-operator |
pod-identity-webhook-679666b9-lfgzj |
Scheduled |
Successfully assigned openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-p4w47 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-p4w47 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-p4w47 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver-5d5579f445-5twj5 |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-5d5579f445-5twj5 |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-5d5579f445-5twj5 |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "apiserver-5d5579f445-5twj5": pod apiserver-5d5579f445-5twj5 is already assigned to node "ci-op-2fcpj5j6-f6035-2lklf-master-0" | ||
openshift-apiserver |
apiserver-5d5579f445-5twj5 |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-5d5579f445-5twj5 to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-p4w47 |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-oauth-apiserver |
apiserver-65b45c6554-p4w47 |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-65b45c6554-p4w47 to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-oauth-apiserver |
apiserver-65b45c6554-p4w47 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-65b45c6554-p4w47 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-984d7 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-984d7 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-984d7 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-984d7 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-984d7 |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-oauth-apiserver |
apiserver-65b45c6554-984d7 |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-65b45c6554-984d7 to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-oauth-apiserver |
apiserver-65b45c6554-984d7 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-65b45c6554-984d7 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-multus |
multus-additional-cni-plugins-98wpj |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-98wpj to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-monitoring |
prometheus-operator-admission-webhook-79648c8fd6-9gxqf |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-operator-admission-webhook-79648c8fd6-9gxqf |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-k8s-1 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-1 to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ||
openshift-oauth-apiserver |
apiserver-58d4d69dd7-f2th2 |
FailedScheduling |
skip schedule deleting pod: openshift-oauth-apiserver/apiserver-58d4d69dd7-f2th2 | ||
openshift-apiserver |
apiserver |
apiserver-5d5579f445-5twj5 |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-apiserver |
apiserver |
apiserver-5d5579f445-5twj5 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-5d5579f445-5twj5 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-5d5579f445-5twj5 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver-58d4d69dd7-f2th2 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver |
apiserver-5d5579f445-5twj5 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver-5d5579f445-zhg9c |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-5d5579f445-zhg9c |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-5d5579f445-zhg9c |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-5d5579f445-zhg9c |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-5d5579f445-zhg9c |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-5d5579f445-zhg9c |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-5d5579f445-zhg9c to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-oauth-apiserver |
apiserver-58d4d69dd7-f2th2 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-586d87f8b7-xg64h |
FailedScheduling |
skip schedule deleting pod: openshift-oauth-apiserver/apiserver-586d87f8b7-xg64h | ||
openshift-oauth-apiserver |
apiserver-586d87f8b7-xg64h |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-cloud-credential-operator |
cloud-credential-operator-585cd96855-j89wm |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-cloud-controller-manager-operator |
cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 |
Scheduled |
Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-cloud-controller-manager |
gcp-cloud-controller-manager-84676f46cb-btj8b |
FailedScheduling |
skip schedule deleting pod: openshift-cloud-controller-manager/gcp-cloud-controller-manager-84676f46cb-btj8b | ||
openshift-cloud-controller-manager |
gcp-cloud-controller-manager-84676f46cb-btj8b |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. | ||
openshift-cloud-controller-manager |
gcp-cloud-controller-manager-84676f46cb-btj8b |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. | ||
openshift-cloud-controller-manager |
gcp-cloud-controller-manager-84676f46cb-74sgj |
Scheduled |
Successfully assigned openshift-cloud-controller-manager/gcp-cloud-controller-manager-84676f46cb-74sgj to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-cloud-controller-manager |
gcp-cloud-controller-manager-6658458d69-j98j6 |
Scheduled |
Successfully assigned openshift-cloud-controller-manager/gcp-cloud-controller-manager-6658458d69-j98j6 to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-cloud-controller-manager |
gcp-cloud-controller-manager-6658458d69-j98j6 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. | ||
openshift-cloud-controller-manager |
gcp-cloud-controller-manager-6658458d69-bwn2n |
Scheduled |
Successfully assigned openshift-cloud-controller-manager/gcp-cloud-controller-manager-6658458d69-bwn2n to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-network-operator |
network-operator-69d4947f66-6pwvp |
Scheduled |
Successfully assigned openshift-network-operator/network-operator-69d4947f66-6pwvp to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-network-operator |
network-operator-69d4947f66-6pwvp |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-8bdbc6bbb-txb89 |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-apiserver |
apiserver-5f8dd75f5c-7rz6r |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-5f8dd75f5c-7rz6r to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-8bdbc6bbb-txb89 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver-5f8dd75f5c-s5f9w |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-5f8dd75f5c-s5f9w to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-oauth-apiserver |
apiserver |
apiserver-8bdbc6bbb-txb89 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver-5f8dd75f5c-z2rvt |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-5f8dd75f5c-z2rvt |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-8bdbc6bbb-txb89 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-oauth-apiserver |
apiserver |
apiserver-8bdbc6bbb-txb89 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-controller-manager |
controller-manager-5f544c54d7-4lmsc |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-insights |
insights-operator-7c7bf5974-mt94h |
Scheduled |
Successfully assigned openshift-insights/insights-operator-7c7bf5974-mt94h to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-insights |
insights-operator-7c7bf5974-mt94h |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ||
openshift-apiserver |
apiserver-6545b7bd68-hjg8d |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-6545b7bd68-hjg8d |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-6545b7bd68-hjg8d |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-6545b7bd68-hjg8d to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-monitoring |
metrics-server-7f98b5f8b5-p26dm |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-7f98b5f8b5-p26dm to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-controller-manager |
controller-manager-5f544c54d7-4lmsc |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-monitoring |
metrics-server-7f98b5f8b5-9v6xq |
Scheduled |
Successfully assigned openshift-monitoring/metrics-server-7f98b5f8b5-9v6xq to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ||
openshift-monitoring |
metrics-server-7f98b5f8b5-9v6xq |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-monitoring |
metrics-server-7f98b5f8b5-9v6xq |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-dcf867d89-n6t8j |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-dcf867d89-n6t8j |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-dcf867d89-n6t8j |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-monitoring |
metrics-server-5ffb7997c-krp7q |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-monitoring |
metrics-server-5ffb7997c-krp7q |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-6545b7bd68-jsb4b |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-6545b7bd68-jsb4b |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-6545b7bd68-jsb4b to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-route-controller-manager |
route-controller-manager-f4fb8bb6c-xr2n6 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-f4fb8bb6c-xr2n6 to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-route-controller-manager |
route-controller-manager-f4fb8bb6c-xr2n6 |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-f4fb8bb6c-xr2n6 |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver-dcf867d89-n6t8j |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-ingress-operator |
ingress-operator-6b9fd98fb4-hksdp |
Scheduled |
Successfully assigned openshift-ingress-operator/ingress-operator-6b9fd98fb4-hksdp to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-ingress-operator |
ingress-operator-6b9fd98fb4-hksdp |
FailedScheduling |
| | | | | | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | metrics-server-5ffb7997c-2fmcw | Scheduled | Successfully assigned openshift-monitoring/metrics-server-5ffb7997c-2fmcw to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-n6t8j | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-n6t8j | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-dcf867d89-n6t8j to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-controller-manager | | controller-manager-5f544c54d7-4lmsc | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-5f544c54d7-4lmsc to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-apiserver | | apiserver-6545b7bd68-wnbqx | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6545b7bd68-wnbqx | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6545b7bd68-wnbqx | Scheduled | Successfully assigned openshift-apiserver/apiserver-6545b7bd68-wnbqx to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-monitoring | | cluster-monitoring-operator-6645c9cbc-qpg45 | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-6645c9cbc-qpg45 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-monitoring | | cluster-monitoring-operator-6645c9cbc-qpg45 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-monitoring | | alertmanager-main-1 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-1 to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-monitoring | | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-multus | | multus-additional-cni-plugins-g8lg9 | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-g8lg9 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-ingress | | router-default-bbcfc976b-xnpn7 | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-bbcfc976b-xnpn7 | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-bbcfc976b-xnpn7 | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-bbcfc976b-xnpn7 | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-controller-manager | | controller-manager-5f544c54d7-7l2qd | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-dc88f967c-cfpfn | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-route-controller-manager | | route-controller-manager-dc88f967c-cfpfn | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-5f544c54d7-7l2qd | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-5f544c54d7-7l2qd to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-apiserver | | apiserver-678f64f7c9-h6bfx | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-678f64f7c9-h6bfx | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-678f64f7c9-h6bfx | Scheduled | Successfully assigned openshift-apiserver/apiserver-678f64f7c9-h6bfx to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-controller-manager | | controller-manager-5f544c54d7-vlsx8 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-marketplace | | redhat-operators-vn4tl | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-vn4tl to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-ingress | | router-default-bbcfc976b-4r8cp | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-bbcfc976b-4r8cp | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-bbcfc976b-4r8cp | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-ingress | | router-default-bbcfc976b-4r8cp | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
| | openshift-controller-manager | | controller-manager-5f544c54d7-vlsx8 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-5f544c54d7-vlsx8 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-5f544c54d7-vlsx8 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-5f544c54d7-vlsx8 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-5f544c54d7-vlsx8 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-rcc6x | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-d8db88b9d-rcc6x to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-rcc6x | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-h6bfx | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-machine-api | | cluster-autoscaler-operator-776f9d4bf4-dthxh | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-n6t8j | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-h6bfx | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-h6bfx | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-h6bfx | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-h6bfx | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | | apiserver-678f64f7c9-lh9lm | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-678f64f7c9-lh9lm | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-678f64f7c9-lh9lm | Scheduled | Successfully assigned openshift-apiserver/apiserver-678f64f7c9-lh9lm to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-n6t8j | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-58sj4 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-d8db88b9d-58sj4 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-58sj4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-58sj4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-58sj4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-n6t8j | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-xktv2 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-xktv2 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-xktv2 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-xktv2 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-xktv2 | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-dcf867d89-xktv2 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-machine-api | | cluster-autoscaler-operator-776f9d4bf4-dthxh | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-776f9d4bf4-dthxh to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-lh9lm | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-network-operator | | mtu-prober-8c2v8 | Scheduled | Successfully assigned openshift-network-operator/mtu-prober-8c2v8 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-controller-manager | | controller-manager-69b9cd8b79-2w8mk | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-lh9lm | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-lh9lm | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-lh9lm | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-lh9lm | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | | apiserver-678f64f7c9-qqtmg | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-678f64f7c9-qqtmg | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-678f64f7c9-qqtmg | Scheduled | Successfully assigned openshift-apiserver/apiserver-678f64f7c9-qqtmg to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-2pwz8 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-d8db88b9d-2pwz8 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-2pwz8 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-d8db88b9d-2pwz8 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-69b9cd8b79-2w8mk | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-69b9cd8b79-2w8mk to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-xktv2 | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-controller-manager | | controller-manager-69b9cd8b79-6kfgx | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-69b9cd8b79-6kfgx | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-69b9cd8b79-6kfgx to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-zrhwj | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-oauth-apiserver | | apiserver-dcf867d89-zrhwj | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-dcf867d89-zrhwj to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-controller-manager | | controller-manager-69b9cd8b79-cwcp4 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-69b9cd8b79-cwcp4 | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-69b9cd8b79-cwcp4 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-route-controller-manager | | route-controller-manager-7b877984c7-pghzh | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-7b877984c7-pghzh to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-route-controller-manager | | route-controller-manager-7b877984c7-pghzh | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-7b877984c7-pghzh | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-qqtmg | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-marketplace | | redhat-operators-skqd4 | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-skqd4 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-machine-api | | cluster-baremetal-operator-7648bf4f7c-nml8w | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-qqtmg | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-qqtmg | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-qqtmg | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-marketplace | | redhat-operators-pppfb | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-pppfb to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-apiserver | apiserver | apiserver-678f64f7c9-qqtmg | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-route-controller-manager | | route-controller-manager-7b877984c7-fvxj4 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-7b877984c7-fvxj4 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-marketplace | | redhat-operators-pmhhd | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-pmhhd to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-machine-api | | cluster-baremetal-operator-7648bf4f7c-nml8w | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-7648bf4f7c-nml8w to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-route-controller-manager | | route-controller-manager-7b877984c7-9dd9p | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-7b877984c7-9dd9p to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-route-controller-manager | | route-controller-manager-67866594b6-phrd7 | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-67866594b6-phrd7 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-route-controller-manager | | route-controller-manager-67866594b6-m5fxg | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-67866594b6-m5fxg to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-marketplace | | redhat-operators-lmrfh | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-lmrfh to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-marketplace | | redhat-operators-drwtw | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-drwtw to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-route-controller-manager | | route-controller-manager-67866594b6-m5fxg | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6d6946f85d-6dknb | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6d6946f85d-6dknb | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6d6946f85d-8v797 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6d6946f85d-8v797 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6d6946f85d-8v797 | Scheduled | Successfully assigned openshift-apiserver/apiserver-6d6946f85d-8v797 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-route-controller-manager | | route-controller-manager-67866594b6-2zw6c | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-67866594b6-2zw6c to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-route-controller-manager | | route-controller-manager-67866594b6-2zw6c | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-5486b44d46-rqg9k | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-5486b44d46-rqg9k to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-route-controller-manager | | route-controller-manager-5486b44d46-rqg9k | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-5486b44d46-6mvfq | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-5486b44d46-6mvfq to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-route-controller-manager | | route-controller-manager-5486b44d46-6mvfq | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | | route-controller-manager-5486b44d46-22bql | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-5486b44d46-22bql to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-route-controller-manager | | route-controller-manager-5486b44d46-22bql | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-marketplace | | redhat-operators-cwhdg | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-cwhdg to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-marketplace | | redhat-operators-9ll4n | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-9ll4n to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-marketplace | | redhat-operators-4vz46 | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-4vz46 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-controller-manager | | controller-manager-6b59c47496-mgxqd | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-6b59c47496-mgxqd to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-zrhwj | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-zrhwj | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-8v797 | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-zrhwj | TerminationStoppedServing | Server has stopped listening |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-zrhwj | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-8v797 | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-8v797 | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-8v797 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-8v797 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-oauth-apiserver | apiserver | apiserver-dcf867d89-zrhwj | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | | apiserver-6d6946f85d-wdq7x | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6d6946f85d-wdq7x | Scheduled | Successfully assigned openshift-apiserver/apiserver-6d6946f85d-wdq7x to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-controller-manager | | controller-manager-74d456756d-fdstp | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-74d456756d-fdstp to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-controller-manager | | controller-manager-74d456756d-twm5k | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-74d456756d-twm5k to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-controller-manager | | controller-manager-78b7d7d855-kpv7q | FailedScheduling | 0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager | | controller-manager-78b7d7d855-kpv7q | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-78b7d7d855-kpv7q to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-controller-manager | | controller-manager-795448867c-2ht6p | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-795448867c-2ht6p to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-network-operator | | iptables-alerter-htnrl | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-htnrl to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-marketplace | | redhat-marketplace-zpgfj | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-zpgfj to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-controller-manager | | controller-manager-795448867c-wt2cs | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-image-registry | | cluster-image-registry-operator-7c8c54f569-rsqg2 | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-7c8c54f569-rsqg2 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-image-registry | | cluster-image-registry-operator-7c8c54f569-rsqg2 | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | | catalog-operator-67dc75ccb9-j6m5x | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-multus | | multus-additional-cni-plugins-k75xr | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-k75xr to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-marketplace | | redhat-marketplace-tmrr6 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-tmrr6 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-marketplace | | redhat-marketplace-j8qz2 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-j8qz2 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-wdq7x | TerminationStart | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-wdq7x | TerminationMinimalShutdownDurationFinished | The minimal shutdown duration of 50s finished |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-wdq7x | TerminationStoppedServing | Server has stopped listening |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-wdq7x | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-apiserver | apiserver | apiserver-6d6946f85d-wdq7x | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-marketplace | | redhat-marketplace-hhxmm | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-hhxmm to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-marketplace | | redhat-marketplace-h5n44 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-h5n44 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-marketplace | | redhat-marketplace-dfq7f | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-dfq7f to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-marketplace | | redhat-marketplace-ctnwj | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-ctnwj to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-marketplace | | redhat-marketplace-7d926 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-7d926 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-marketplace | | redhat-marketplace-4q6c9 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-4q6c9 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-apiserver | | apiserver-6d7dbc56c5-jl6d4 | FailedScheduling | 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-apiserver | | apiserver-6d7dbc56c5-jl6d4 | Scheduled | Successfully assigned openshift-apiserver/apiserver-6d7dbc56c5-jl6d4 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-marketplace | | redhat-marketplace-2p9x6 | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-2p9x6 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-operator-lifecycle-manager | | catalog-operator-67dc75ccb9-j6m5x | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-67dc75ccb9-j6m5x to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-controller-manager | | controller-manager-795448867c-wt2cs | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-795448867c-wt2cs to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-network-operator | | iptables-alerter-dc4tl | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-dc4tl to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-multus | | multus-additional-cni-plugins-l7nh6 | Scheduled | Successfully assigned openshift-multus/multus-additional-cni-plugins-l7nh6 to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-controller-manager | | controller-manager-795448867c-z7dl4 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-operator-lifecycle-manager | | collect-profiles-28829580-2pbsx | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-operator-lifecycle-manager | | collect-profiles-28829580-2pbsx | FailedScheduling | 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. |
openshift-operator-lifecycle-manager |
collect-profiles-28829580-2pbsx |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829580-2pbsx |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-marketplace |
marketplace-operator-7ddb67b76c-d2flk |
Scheduled |
Successfully assigned openshift-marketplace/marketplace-operator-7ddb67b76c-d2flk to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-marketplace |
marketplace-operator-7ddb67b76c-d2flk |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-marketplace |
community-operators-wj7jh |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-wj7jh to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
community-operators-vmqhx |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-vmqhx to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-jl6d4 |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-jl6d4 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-jl6d4 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-jl6d4 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-marketplace |
community-operators-s6thr |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-s6thr to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-jl6d4 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver-6d7dbc56c5-l698n |
FailedScheduling |
0/5 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-6d7dbc56c5-l698n |
FailedScheduling |
0/5 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-6d7dbc56c5-l698n |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829580-2pbsx |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829580-2pbsx |
FailedScheduling |
skip schedule deleting pod: openshift-operator-lifecycle-manager/collect-profiles-28829580-2pbsx | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829595-6kbmk |
FailedScheduling |
0/5 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling. | ||
openshift-controller-manager |
controller-manager-795448867c-z7dl4 |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-795448867c-z7dl4 to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-network-operator |
iptables-alerter-4qlsv |
Scheduled |
Successfully assigned openshift-network-operator/iptables-alerter-4qlsv to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829610-rrpw2 |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28829610-rrpw2 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829625-629m7 |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28829625-629m7 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829640-fvzjr |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28829640-fvzjr to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829655-d7dgm |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28829655-d7dgm to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829670-8jp8p |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28829670-8jp8p to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-operator-lifecycle-manager |
collect-profiles-28829685-ss2qp |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28829685-ss2qp to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-operator-lifecycle-manager |
olm-operator-7497f58c94-vgnwd |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-operator-lifecycle-manager |
olm-operator-7497f58c94-vgnwd |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/olm-operator-7497f58c94-vgnwd to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-l698n |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-l698n |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-l698n |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-l698n |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-6d7dbc56c5-l698n |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-controller-manager |
controller-manager-7f4b9d6458-ltvdx |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-7f4b9d6458-ltvdx |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-7f4b9d6458-ltvdx |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-7f4b9d6458-ltvdx to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-operator-lifecycle-manager |
package-server-manager-f7554d4b7-xd4h9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-apiserver |
apiserver-77d45ddc66-4mc2q |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-77d45ddc66-4mc2q |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-77d45ddc66-4mc2q |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-77d45ddc66-mpqfk |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-77d45ddc66-mpqfk |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-77d45ddc66-mpqfk |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-77d45ddc66-mpqfk to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-operator-lifecycle-manager |
package-server-manager-f7554d4b7-xd4h9 |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-f7554d4b7-xd4h9 to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-machine-api |
control-plane-machine-set-operator-7667c744f7-8tlf7 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-controller-manager |
controller-manager-86cf9fc757-rf8dk |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-86cf9fc757-rf8dk |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-86cf9fc757-rf8dk |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-86cf9fc757-rf8dk to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-machine-api |
control-plane-machine-set-operator-7667c744f7-8tlf7 |
Scheduled |
Successfully assigned openshift-machine-api/control-plane-machine-set-operator-7667c744f7-8tlf7 to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-operator-lifecycle-manager |
packageserver-784848ddf-lph6j |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/packageserver-784848ddf-lph6j to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-machine-api |
machine-api-controllers-7785d897-m4jlj |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-controllers-7785d897-m4jlj to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-network-node-identity |
network-node-identity-qfbfs |
Scheduled |
Successfully assigned openshift-network-node-identity/network-node-identity-qfbfs to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-network-node-identity |
network-node-identity-m577s |
Scheduled |
Successfully assigned openshift-network-node-identity/network-node-identity-m577s to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-controller-manager-operator |
openshift-controller-manager-operator-786b85b959-zrm7s |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-controller-manager-operator |
openshift-controller-manager-operator-786b85b959-zrm7s |
Scheduled |
Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-786b85b959-zrm7s to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-apiserver |
apiserver |
apiserver-77d45ddc66-mpqfk |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-machine-api |
machine-api-operator-c6cf9575f-k7jtl |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-machine-api |
machine-api-operator-c6cf9575f-k7jtl |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-c6cf9575f-k7jtl to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-apiserver |
apiserver |
apiserver-77d45ddc66-mpqfk |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-77d45ddc66-mpqfk |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-77d45ddc66-mpqfk |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-77d45ddc66-mpqfk |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver-77d45ddc66-sw2kd |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-77d45ddc66-sw2kd |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-77d45ddc66-sw2kd to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-authentication-operator |
authentication-operator-7b558f58f9-nfmbb |
Scheduled |
Successfully assigned openshift-authentication-operator/authentication-operator-7b558f58f9-nfmbb to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-authentication-operator |
authentication-operator-7b558f58f9-nfmbb |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-network-node-identity |
network-node-identity-gjpmc |
Scheduled |
Successfully assigned openshift-network-node-identity/network-node-identity-gjpmc to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-operator-lifecycle-manager |
packageserver-784848ddf-lw2pp |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/packageserver-784848ddf-lw2pp to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-network-diagnostics |
network-check-target-ztlz7 |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-ztlz7 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-dns |
dns-default-4lqzz |
Scheduled |
Successfully assigned openshift-dns/dns-default-4lqzz to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-dns |
dns-default-4lqzz |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "dns-default-4lqzz": pod dns-default-4lqzz is already assigned to node "ci-op-2fcpj5j6-f6035-2lklf-master-2" | ||
openshift-network-diagnostics |
network-check-target-zh6rm |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-zh6rm to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ||
openshift-network-diagnostics |
network-check-target-vqt97 |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-vqt97 to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ||
openshift-ovn-kubernetes |
ovnkube-control-plane-54656c84bd-cn29j |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-54656c84bd-cn29j to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-authentication |
oauth-openshift-c84b6d8c7-9l5gh |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-c84b6d8c7-9l5gh |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-9958d4595-s94mv |
FailedScheduling |
skip schedule deleting pod: openshift-authentication/oauth-openshift-9958d4595-s94mv | ||
openshift-marketplace |
community-operators-qbjs8 |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-qbjs8 to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
community-operators-m42qg |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-m42qg to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
community-operators-l2rlg |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-l2rlg to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
community-operators-9z5nm |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-9z5nm to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
community-operators-5bkrv |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-5bkrv to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-authentication |
oauth-openshift-9958d4595-s94mv |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-ovn-kubernetes |
ovnkube-control-plane-54656c84bd-zpzjn |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-control-plane-54656c84bd-zpzjn to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-apiserver |
apiserver |
apiserver-77d45ddc66-sw2kd |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-authentication |
oauth-openshift-7fbb585d7c-2g9nh |
FailedScheduling |
skip schedule deleting pod: openshift-authentication/oauth-openshift-7fbb585d7c-2g9nh | ||
openshift-etcd-operator |
etcd-operator-7bbcf99d5c-9746p |
Scheduled |
Successfully assigned openshift-etcd-operator/etcd-operator-7bbcf99d5c-9746p to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-etcd-operator |
etcd-operator-7bbcf99d5c-9746p |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-marketplace |
community-operators-2wjrj |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-2wjrj to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
community-operators-24v2d |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-24v2d to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
certified-operators-wnhvq |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-wnhvq to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-apiserver |
apiserver-79fb6d9f75-d2mgw |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-79fb6d9f75-d2mgw |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-79fb6d9f75-d2mgw |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-79fb6d9f75-d2mgw to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
certified-operators-nln6m |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-nln6m to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-authentication |
oauth-openshift-7fbb585d7c-2g9nh |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-ovn-kubernetes |
ovnkube-node-8bwwx |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-8bwwx to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-ovn-kubernetes |
ovnkube-node-b4b8k |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-b4b8k to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-authentication |
oauth-openshift-74c78cc4c7-t6vbt |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-74c78cc4c7-t6vbt to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-ovn-kubernetes |
ovnkube-node-ts5rk |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-ts5rk to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-authentication |
oauth-openshift-74c78cc4c7-t6vbt |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-74c78cc4c7-t6vbt |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-74c78cc4c7-qwdfr |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-74c78cc4c7-qwdfr to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-authentication |
oauth-openshift-74c78cc4c7-qwdfr |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-74c78cc4c7-qwdfr |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-oauth-apiserver |
apiserver |
apiserver-65b45c6554-tglr4 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-d2mgw |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-authentication |
oauth-openshift-74c78cc4c7-nk55v |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-67d88f768b-zbp7v |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-67d88f768b-zbp7v to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-d2mgw |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-d2mgw |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-d2mgw |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-d2mgw |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver-79fb6d9f75-tmvdf |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-79fb6d9f75-tmvdf |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-79fb6d9f75-tmvdf |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-79fb6d9f75-tmvdf to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-authentication |
oauth-openshift-67d88f768b-zbp7v |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-67d88f768b-zbp7v |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-ovn-kubernetes |
ovnkube-node-ccj7k |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-ccj7k to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-authentication |
oauth-openshift-67d88f768b-wblgk |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-67d88f768b-wblgk to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-authentication |
oauth-openshift-67d88f768b-wblgk |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-67d88f768b-wblgk |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-67d88f768b-dqtrl |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-67d88f768b-dqtrl to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-authentication |
oauth-openshift-67d88f768b-dqtrl |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-67d88f768b-dqtrl |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-66d787f86d-n8zks |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-66d787f86d-n8zks to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-authentication |
oauth-openshift-66d787f86d-n8zks |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-66d787f86d-n8zks |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-66d787f86d-ln9xx |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-66d787f86d-ln9xx to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-authentication |
oauth-openshift-66d787f86d-ln9xx |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-tmvdf |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-authentication |
oauth-openshift-66d787f86d-9frbd |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-tmvdf |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-tmvdf |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-tmvdf |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-authentication |
oauth-openshift-66d787f86d-9frbd |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-tmvdf |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver-79fb6d9f75-wm8d6 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-79fb6d9f75-wm8d6 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-79fb6d9f75-wm8d6 |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-79fb6d9f75-wm8d6 |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-79fb6d9f75-wm8d6 to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-ovn-kubernetes |
ovnkube-node-m7hdx |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-m7hdx to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-ovn-kubernetes |
ovnkube-node-qpm7v |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-qpm7v to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-ovn-kubernetes |
ovnkube-node-n6p6d |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-n6p6d to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-ovn-kubernetes |
ovnkube-node-pfhnt |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-pfhnt to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ||
openshift-kube-scheduler-operator |
openshift-kube-scheduler-operator-7b64b578df-w9z5s |
Scheduled |
Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-7b64b578df-w9z5s to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-kube-scheduler-operator |
openshift-kube-scheduler-operator-7b64b578df-w9z5s |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-network-diagnostics |
network-check-target-vkzwz |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-vkzwz to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-network-diagnostics |
network-check-target-sm44g |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-sm44g to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-network-diagnostics |
network-check-target-6wq7r |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-6wq7r to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-dns |
dns-default-sstbc |
Scheduled |
Successfully assigned openshift-dns/dns-default-sstbc to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-dns |
dns-default-sstbc |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "dns-default-sstbc": pod dns-default-sstbc is already assigned to node "ci-op-2fcpj5j6-f6035-2lklf-master-1" | ||
openshift-dns |
dns-default-zmd45 |
Scheduled |
Successfully assigned openshift-dns/dns-default-zmd45 to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-wm8d6 |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-dns |
dns-default-zmd45 |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "dns-default-zmd45": pod dns-default-zmd45 is already assigned to node "ci-op-2fcpj5j6-f6035-2lklf-master-0" | ||
openshift-machine-config-operator |
machine-config-controller-54475c996-znc5k |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-controller-54475c996-znc5k to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-wm8d6 |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-wm8d6 |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-wm8d6 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-machine-config-operator |
machine-config-daemon-69dkf |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-69dkf to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ||
openshift-apiserver |
apiserver |
apiserver-79fb6d9f75-wm8d6 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-machine-config-operator |
machine-config-daemon-8lq4q |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-8lq4q to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ||
openshift-machine-config-operator |
machine-config-daemon-p46lt |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-p46lt to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-machine-config-operator |
machine-config-daemon-vf8g9 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-vf8g9 to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-multus |
multus-additional-cni-plugins-tqb4j |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-tqb4j to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ||
openshift-marketplace |
certified-operators-l48ct |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-l48ct to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
certified-operators-jtb9t |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-jtb9t to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-apiserver |
apiserver-f67c66b4b-sppzf |
FailedScheduling |
0/3 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-f67c66b4b-sppzf |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-f67c66b4b-sppzf to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-marketplace |
certified-operators-hn5hn |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-hn5hn to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
certified-operators-gh6hl |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-gh6hl to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-marketplace |
certified-operators-fbht5 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-fbht5 to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-marketplace |
certified-operators-9fj4p |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-9fj4p to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-marketplace |
certified-operators-8dppf |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-8dppf to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-marketplace |
certified-operators-44t92 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-44t92 to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-network-diagnostics |
network-check-source-5ff84586ff-b49fv |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-network-diagnostics |
network-check-source-5ff84586ff-b49fv |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-dns |
node-resolver-4mbtw |
Scheduled |
Successfully assigned openshift-dns/node-resolver-4mbtw to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ||
openshift-multus |
multus-admission-controller-64669dd88c-b4vtj |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-multus |
multus-admission-controller-64669dd88c-b4vtj |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-64669dd88c-b4vtj to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-multus |
multus-admission-controller-64669dd88c-zvr4t |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-multus |
multus-admission-controller-64669dd88c-zvr4t |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-64669dd88c-zvr4t to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-machine-config-operator |
machine-config-server-q6rkk |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-q6rkk to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-machine-config-operator |
machine-config-server-btncf |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-btncf to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-sppzf |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-machine-config-operator |
machine-config-server-5qcv6 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-server-5qcv6 to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-machine-config-operator |
machine-config-daemon-xpqvd |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-xpqvd to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-sppzf |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-sppzf |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-sppzf |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-sppzf |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-apiserver |
apiserver-f67c66b4b-tjp8m |
FailedScheduling |
0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. | ||
openshift-apiserver |
apiserver-f67c66b4b-tjp8m |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-f67c66b4b-tjp8m to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-dns |
node-resolver-5bx75 |
Scheduled |
Successfully assigned openshift-dns/node-resolver-5bx75 to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-multus |
multus-admission-controller-749bf6f86d-5zx7k |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-749bf6f86d-5zx7k to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-multus |
multus-admission-controller-749bf6f86d-f9cds |
Scheduled |
Successfully assigned openshift-multus/multus-admission-controller-749bf6f86d-f9cds to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-apiserver-operator |
openshift-apiserver-operator-6846798df4-kwxvp |
Scheduled |
Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-6846798df4-kwxvp to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-multus |
multus-hb5v6 |
Scheduled |
Successfully assigned openshift-multus/multus-hb5v6 to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ||
openshift-multus |
multus-r69xv |
Scheduled |
Successfully assigned openshift-multus/multus-r69xv to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-apiserver-operator |
openshift-apiserver-operator-6846798df4-kwxvp |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-dns |
node-resolver-5bx75 |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "node-resolver-5bx75": pod node-resolver-5bx75 is already assigned to node "ci-op-2fcpj5j6-f6035-2lklf-master-2" | ||
openshift-dns |
node-resolver-5f8w4 |
Scheduled |
Successfully assigned openshift-dns/node-resolver-5f8w4 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-dns |
node-resolver-cn9sx |
Scheduled |
Successfully assigned openshift-dns/node-resolver-cn9sx to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-multus |
multus-vsdll |
Scheduled |
Successfully assigned openshift-multus/multus-vsdll to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-dns |
node-resolver-cn9sx |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "node-resolver-cn9sx": pod node-resolver-cn9sx is already assigned to node "ci-op-2fcpj5j6-f6035-2lklf-master-0" | ||
openshift-dns |
node-resolver-dgsqw |
Scheduled |
Successfully assigned openshift-dns/node-resolver-dgsqw to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-dns |
node-resolver-dgsqw |
FailedScheduling |
running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "node-resolver-dgsqw": pod node-resolver-dgsqw is already assigned to node "ci-op-2fcpj5j6-f6035-2lklf-master-1" | ||
openshift-dns |
node-resolver-h4skh |
Scheduled |
Successfully assigned openshift-dns/node-resolver-h4skh to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ||
openshift-machine-config-operator |
machine-config-daemon-zhbnq |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-daemon-zhbnq to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-dns-operator |
dns-operator-79c9668d4f-5xbr8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-multus |
multus-vxgq8 |
Scheduled |
Successfully assigned openshift-multus/multus-vxgq8 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-dns-operator |
dns-operator-79c9668d4f-5xbr8 |
Scheduled |
Successfully assigned openshift-dns-operator/dns-operator-79c9668d4f-5xbr8 to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-tjp8m |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-network-diagnostics |
network-check-source-5ff84586ff-b49fv |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-network-diagnostics |
network-check-source-5ff84586ff-b49fv |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-tjp8m |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 50s finished | |
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-tjp8m |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-tjp8m |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-f67c66b4b-tjp8m |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-network-diagnostics |
network-check-source-5ff84586ff-b49fv |
FailedScheduling |
0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling. | ||
openshift-multus |
network-metrics-daemon-7wllj |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-7wllj to ci-op-2fcpj5j6-f6035-2lklf-master-0 | ||
openshift-network-console |
networking-console-plugin-5cd86b96f5-dh6vw |
Scheduled |
Successfully assigned openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ||
openshift-network-console |
networking-console-plugin-5cd86b96f5-4mr48 |
Scheduled |
Successfully assigned openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
openshift-ovn-kubernetes |
ovnkube-node-qfgpz |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-qfgpz to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ||
openshift-multus |
network-metrics-daemon-bmskb |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-bmskb to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ||
openshift-multus |
network-metrics-daemon-tj4rp |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-tj4rp to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-machine-config-operator |
machine-config-operator-55d6dfd54f-k2phh |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.cloudprovider.kubernetes.io/uninitialized: true}. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | ||
openshift-multus |
network-metrics-daemon-d5jsz |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-d5jsz to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ||
openshift-multus |
network-metrics-daemon-flk7c |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-flk7c to ci-op-2fcpj5j6-f6035-2lklf-master-2 | ||
openshift-machine-config-operator |
machine-config-operator-55d6dfd54f-k2phh |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-operator-55d6dfd54f-k2phh to ci-op-2fcpj5j6-f6035-2lklf-master-1 | ||
openshift-multus |
network-metrics-daemon-sfvgs |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-sfvgs to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ||
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-bootstrap_21ecccfc-5f40-47e8-8aaa-8cc8c149f689 became leader | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-bootstrap_20a31ed9-223c-4a7e-a9b0-0ce6f21c2e1e became leader | |
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-bootstrap_1af19e02-326e-4b4c-8ba3-8d1fe7d24f4d became leader | |
kube-system |
cluster-policy-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-bootstrap_bf1ec7db-a494-4a5a-8b49-3f32c65a861f became leader | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-version namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-etcd namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for kube-system namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for default namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for kube-public namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for kube-node-lease namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-apiserver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-e2e-loki namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-credential-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-ingress-operator namespace | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-bootstrap_b2cad68b-5c38-40a4-a60c-a4c2533b1ae6 became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-bootstrap_a79784c4-c75f-48ce-bc43-685314bd7ce6 became leader | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-7446f46455 to 1 | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-bootstrap_94fd7365-423a-4c70-aff5-fbcfd96937bb became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-network-config-controller namespace | |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" architecture="amd64" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-storage-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-marketplace namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-csi-drivers namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-authentication-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-network-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-node-tuning-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-machine-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-insights namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-etcd-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-machine-approver namespace | |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-controller-manager-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-dns-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-service-ca-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-image-registry namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-cluster-samples-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-openstack-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-cloud-controller-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-kni-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator-operator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-operator-lifecycle-manager namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-ovirt-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-vsphere-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-operators namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-nutanix-infra namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-cloud-platform-infra namespace |
| | openshift-service-ca-operator | deployment-controller | service-ca-operator | ScalingReplicaSet | Scaled up replica set service-ca-operator-7bf6f695bf to 1 |
| | openshift-kube-controller-manager-operator | deployment-controller | kube-controller-manager-operator | ScalingReplicaSet | Scaled up replica set kube-controller-manager-operator-7c885b8899 to 1 |
| | openshift-kube-scheduler-operator | deployment-controller | openshift-kube-scheduler-operator | ScalingReplicaSet | Scaled up replica set openshift-kube-scheduler-operator-7b64b578df to 1 |
| | openshift-network-operator | deployment-controller | network-operator | ScalingReplicaSet | Scaled up replica set network-operator-69d4947f66 to 1 |
| | openshift-dns-operator | deployment-controller | dns-operator | ScalingReplicaSet | Scaled up replica set dns-operator-79c9668d4f to 1 |
| | openshift-controller-manager-operator | deployment-controller | openshift-controller-manager-operator | ScalingReplicaSet | Scaled up replica set openshift-controller-manager-operator-786b85b959 to 1 |
| | openshift-apiserver-operator | deployment-controller | openshift-apiserver-operator | ScalingReplicaSet | Scaled up replica set openshift-apiserver-operator-6846798df4 to 1 |
| | openshift-marketplace | deployment-controller | marketplace-operator | ScalingReplicaSet | Scaled up replica set marketplace-operator-7ddb67b76c to 1 |
| (x2) | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-monitoring namespace |
| | openshift-authentication-operator | deployment-controller | authentication-operator | ScalingReplicaSet | Scaled up replica set authentication-operator-7b558f58f9 to 1 |
| | openshift-kube-storage-version-migrator-operator | deployment-controller | kube-storage-version-migrator-operator | ScalingReplicaSet | Scaled up replica set kube-storage-version-migrator-operator-86c7d8d555 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-user-workload-monitoring namespace |
| | openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-7bbcf99d5c to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-config-managed namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-config namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-machine-api namespace |
| | openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-5b66777f7c to 1 |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-9bd7f8667 to 1 |
| | openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-6645c9cbc to 1 |
| | openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-f7554d4b7 to 1 |
| | openshift-kube-apiserver-operator | deployment-controller | kube-apiserver-operator | ScalingReplicaSet | Scaled up replica set kube-apiserver-operator-749f4b99b7 to 1 |
| | openshift-operator-lifecycle-manager | deployment-controller | olm-operator | ScalingReplicaSet | Scaled up replica set olm-operator-7497f58c94 to 1 |
| | openshift-ingress-operator | deployment-controller | ingress-operator | ScalingReplicaSet | Scaled up replica set ingress-operator-6b9fd98fb4 to 1 |
| | openshift-image-registry | deployment-controller | cluster-image-registry-operator | ScalingReplicaSet | Scaled up replica set cluster-image-registry-operator-7c8c54f569 to 1 |
| | openshift-operator-lifecycle-manager | deployment-controller | catalog-operator | ScalingReplicaSet | Scaled up replica set catalog-operator-67dc75ccb9 to 1 |
| | openshift-config-operator | deployment-controller | openshift-config-operator | ScalingReplicaSet | Scaled up replica set openshift-config-operator-85b957bbfc to 1 |
| | openshift-machine-config-operator | deployment-controller | machine-config-operator | ScalingReplicaSet | Scaled up replica set machine-config-operator-55d6dfd54f to 1 |
| | openshift-cluster-storage-operator | deployment-controller | cluster-storage-operator | ScalingReplicaSet | Scaled up replica set cluster-storage-operator-86f6b4f867 to 1 |
| | openshift-machine-api | deployment-controller | machine-api-operator | ScalingReplicaSet | Scaled up replica set machine-api-operator-c6cf9575f to 1 |
| | openshift-insights | deployment-controller | insights-operator | ScalingReplicaSet | Scaled up replica set insights-operator-7c7bf5974 to 1 |
| (x14) | openshift-cluster-version | replicaset-controller | cluster-version-operator-7446f46455 | FailedCreate | Error creating: pods "cluster-version-operator-7446f46455-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-service-ca-operator | replicaset-controller | service-ca-operator-7bf6f695bf | FailedCreate | Error creating: pods "service-ca-operator-7bf6f695bf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-network-operator | replicaset-controller | network-operator-69d4947f66 | FailedCreate | Error creating: pods "network-operator-69d4947f66-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-786b85b959 | FailedCreate | Error creating: pods "openshift-controller-manager-operator-786b85b959-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled down replica set cluster-version-operator-7446f46455 to 0 from 1 |
| (x14) | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-6846798df4 | FailedCreate | Error creating: pods "openshift-apiserver-operator-6846798df4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7b64b578df | FailedCreate | Error creating: pods "openshift-kube-scheduler-operator-7b64b578df-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-7c885b8899 | FailedCreate | Error creating: pods "kube-controller-manager-operator-7c885b8899-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-version | deployment-controller | cluster-version-operator | ScalingReplicaSet | Scaled up replica set cluster-version-operator-59fc58bb8 to 1 |
| (x14) | openshift-dns-operator | replicaset-controller | dns-operator-79c9668d4f | FailedCreate | Error creating: pods "dns-operator-79c9668d4f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-machine-api | deployment-controller | control-plane-machine-set-operator | ScalingReplicaSet | Scaled up replica set control-plane-machine-set-operator-7667c744f7 to 1 |
| (x14) | openshift-marketplace | replicaset-controller | marketplace-operator-7ddb67b76c | FailedCreate | Error creating: pods "marketplace-operator-7ddb67b76c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-b5cfbb49c to 1 |
| (x14) | openshift-authentication-operator | replicaset-controller | authentication-operator-7b558f58f9 | FailedCreate | Error creating: pods "authentication-operator-7b558f58f9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-86c7d8d555 | FailedCreate | Error creating: pods "kube-storage-version-migrator-operator-86c7d8d555-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cloud-credential-operator | deployment-controller | cloud-credential-operator | ScalingReplicaSet | Scaled up replica set cloud-credential-operator-585cd96855 to 1 |
| (x14) | openshift-etcd-operator | replicaset-controller | etcd-operator-7bbcf99d5c | FailedCreate | Error creating: pods "etcd-operator-7bbcf99d5c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-machine-api | deployment-controller | cluster-autoscaler-operator | ScalingReplicaSet | Scaled up replica set cluster-autoscaler-operator-776f9d4bf4 to 1 |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-7648bf4f7c to 1 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-68fd98b997 to 1 |
| (x14) | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-5b66777f7c | FailedCreate | Error creating: pods "cluster-node-tuning-operator-5b66777f7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-9bd7f8667 | FailedCreate | Error creating: pods "csi-snapshot-controller-operator-9bd7f8667-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-6645c9cbc | FailedCreate | Error creating: pods "cluster-monitoring-operator-6645c9cbc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-f7554d4b7 | FailedCreate | Error creating: pods "package-server-manager-f7554d4b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-749f4b99b7 | FailedCreate | Error creating: pods "kube-apiserver-operator-749f4b99b7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-ingress-operator | replicaset-controller | ingress-operator-6b9fd98fb4 | FailedCreate | Error creating: pods "ingress-operator-6b9fd98fb4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-7497f58c94 | FailedCreate | Error creating: pods "olm-operator-7497f58c94-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-7c8c54f569 | FailedCreate | Error creating: pods "cluster-image-registry-operator-7c8c54f569-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-config-operator | replicaset-controller | openshift-config-operator-85b957bbfc | FailedCreate | Error creating: pods "openshift-config-operator-85b957bbfc-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-machine-config-operator | replicaset-controller | machine-config-operator-55d6dfd54f | FailedCreate | Error creating: pods "machine-config-operator-55d6dfd54f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-67dc75ccb9 | FailedCreate | Error creating: pods "catalog-operator-67dc75ccb9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-86f6b4f867 | FailedCreate | Error creating: pods "cluster-storage-operator-86f6b4f867-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-7648bf4f7c | FailedCreate | Error creating: pods "cluster-baremetal-operator-7648bf4f7c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-insights | replicaset-controller | insights-operator-7c7bf5974 | FailedCreate | Error creating: pods "insights-operator-7c7bf5974-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x14) | openshift-machine-api | replicaset-controller | machine-api-operator-c6cf9575f | FailedCreate | Error creating: pods "machine-api-operator-c6cf9575f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-776f9d4bf4 | FailedCreate | Error creating: pods "cluster-autoscaler-operator-776f9d4bf4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x12) | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-68fd98b997 | FailedCreate | Error creating: pods "cluster-cloud-controller-manager-operator-68fd98b997-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x13) | openshift-cluster-version | replicaset-controller | cluster-version-operator-59fc58bb8 | FailedCreate | Error creating: pods "cluster-version-operator-59fc58bb8-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x13) | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-7667c744f7 | FailedCreate | Error creating: pods "control-plane-machine-set-operator-7667c744f7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x13) | openshift-cluster-machine-approver | replicaset-controller | machine-approver-b5cfbb49c | FailedCreate | Error creating: pods "machine-approver-b5cfbb49c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-28829580 |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled up replica set machine-approver-5697c6f6dd to 1 |
| (x13) | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-585cd96855 | FailedCreate | Error creating: pods "cloud-credential-operator-585cd96855-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| (x2) | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829580 | FailedCreate | Error creating: pods "collect-profiles-28829580-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-cluster-machine-approver | deployment-controller | machine-approver | ScalingReplicaSet | Scaled down replica set machine-approver-b5cfbb49c to 0 from 1 |
| (x9) | openshift-cluster-machine-approver | replicaset-controller | machine-approver-5697c6f6dd | FailedCreate | Error creating: pods "machine-approver-5697c6f6dd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
| | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-776f9d4bf4 | SuccessfulCreate | Created pod: cluster-autoscaler-operator-776f9d4bf4-dthxh |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829580 | SuccessfulCreate | Created pod: collect-profiles-28829580-2pbsx |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller |
| | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-7648bf4f7c | SuccessfulCreate | Created pod: cluster-baremetal-operator-7648bf4f7c-nml8w |
| | openshift-cluster-machine-approver | replicaset-controller | machine-approver-5697c6f6dd | SuccessfulCreate | Created pod: machine-approver-5697c6f6dd-kpg6d |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-68fd98b997 | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:43c131e0ab4daf9b297d84bda92ba78bd5df8af483ad8e96e10d05d37cd4a08a" |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-68fd98b997 | SuccessfulDelete | Deleted pod: cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled down replica set cluster-cloud-controller-manager-operator-68fd98b997 to 0 from 1 |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 | Failed | Error: services have not yet been read at least once, cannot construct envvars |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 | Failed | Error: services have not yet been read at least once, cannot construct envvars |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:43c131e0ab4daf9b297d84bda92ba78bd5df8af483ad8e96e10d05d37cd4a08a" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 | Failed | Error: services have not yet been read at least once, cannot construct envvars |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-68fd98b997-wpdp9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:43c131e0ab4daf9b297d84bda92ba78bd5df8af483ad8e96e10d05d37cd4a08a" in 3.543s (3.543s including waiting). Image size: 528680210 bytes. |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-rbac-proxy-crio |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-f546c9d4b to 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-f546c9d4b | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn |
| | openshift-service-ca-operator | replicaset-controller | service-ca-operator-7bf6f695bf | SuccessfulCreate | Created pod: service-ca-operator-7bf6f695bf-4rjcs |
| | openshift-network-operator | replicaset-controller | network-operator-69d4947f66 | SuccessfulCreate | Created pod: network-operator-69d4947f66-6pwvp |
| | openshift-kube-scheduler-operator | replicaset-controller | openshift-kube-scheduler-operator-7b64b578df | SuccessfulCreate | Created pod: openshift-kube-scheduler-operator-7b64b578df-w9z5s |
| | openshift-controller-manager-operator | replicaset-controller | openshift-controller-manager-operator-786b85b959 | SuccessfulCreate | Created pod: openshift-controller-manager-operator-786b85b959-zrm7s |
| | openshift-apiserver-operator | replicaset-controller | openshift-apiserver-operator-6846798df4 | SuccessfulCreate | Created pod: openshift-apiserver-operator-6846798df4-kwxvp |
| | openshift-kube-controller-manager-operator | replicaset-controller | kube-controller-manager-operator-7c885b8899 | SuccessfulCreate | Created pod: kube-controller-manager-operator-7c885b8899-z89zf |
| | openshift-cluster-version | replicaset-controller | cluster-version-operator-59fc58bb8 | SuccessfulCreate | Created pod: cluster-version-operator-59fc58bb8-h6cf2 |
| | openshift-dns-operator | replicaset-controller | dns-operator-79c9668d4f | SuccessfulCreate | Created pod: dns-operator-79c9668d4f-5xbr8 |
| | openshift-machine-api | replicaset-controller | control-plane-machine-set-operator-7667c744f7 | SuccessfulCreate | Created pod: control-plane-machine-set-operator-7667c744f7-8tlf7 |
| | openshift-marketplace | replicaset-controller | marketplace-operator-7ddb67b76c | SuccessfulCreate | Created pod: marketplace-operator-7ddb67b76c-d2flk |
| | openshift-kube-storage-version-migrator-operator | replicaset-controller | kube-storage-version-migrator-operator-86c7d8d555 | SuccessfulCreate | Created pod: kube-storage-version-migrator-operator-86c7d8d555-x49bl |
| | openshift-authentication-operator | replicaset-controller | authentication-operator-7b558f58f9 | SuccessfulCreate | Created pod: authentication-operator-7b558f58f9-nfmbb |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-585cd96855 | SuccessfulCreate | Created pod: cloud-credential-operator-585cd96855-j89wm |
| | openshift-etcd-operator | replicaset-controller | etcd-operator-7bbcf99d5c | SuccessfulCreate | Created pod: etcd-operator-7bbcf99d5c-9746p |
| (x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn | Failed | Error: services have not yet been read at least once, cannot construct envvars |
| (x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn | Failed | Error: services have not yet been read at least once, cannot construct envvars |
| (x3) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn | Failed | Error: services have not yet been read at least once, cannot construct envvars |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-9bd7f8667 | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-9bd7f8667-lfs5z |
| | openshift-cluster-node-tuning-operator | replicaset-controller | cluster-node-tuning-operator-5b66777f7c | SuccessfulCreate | Created pod: cluster-node-tuning-operator-5b66777f7c-9pqmc |
| | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-6645c9cbc | SuccessfulCreate | Created pod: cluster-monitoring-operator-6645c9cbc-qpg45 |
| | openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-f7554d4b7 | SuccessfulCreate | Created pod: package-server-manager-f7554d4b7-xd4h9 |
| | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-749f4b99b7 | SuccessfulCreate | Created pod: kube-apiserver-operator-749f4b99b7-fqnd2 |
| | openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-7497f58c94 | SuccessfulCreate | Created pod: olm-operator-7497f58c94-vgnwd |
| | openshift-image-registry | replicaset-controller | cluster-image-registry-operator-7c8c54f569 | SuccessfulCreate | Created pod: cluster-image-registry-operator-7c8c54f569-rsqg2 |
| | openshift-ingress-operator | replicaset-controller | ingress-operator-6b9fd98fb4 | SuccessfulCreate | Created pod: ingress-operator-6b9fd98fb4-hksdp |
| | openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-67dc75ccb9 | SuccessfulCreate | Created pod: catalog-operator-67dc75ccb9-j6m5x |
| | openshift-config-operator | replicaset-controller | openshift-config-operator-85b957bbfc | SuccessfulCreate | Created pod: openshift-config-operator-85b957bbfc-dwcrh |
| | openshift-machine-config-operator | replicaset-controller | machine-config-operator-55d6dfd54f | SuccessfulCreate | Created pod: machine-config-operator-55d6dfd54f-k2phh |
| | openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-86f6b4f867 | SuccessfulCreate | Created pod: cluster-storage-operator-86f6b4f867-vvnvr |
| | openshift-machine-api | replicaset-controller | machine-api-operator-c6cf9575f | SuccessfulCreate | Created pod: machine-api-operator-c6cf9575f-k7jtl |
| | openshift-insights | replicaset-controller | insights-operator-7c7bf5974 | SuccessfulCreate | Created pod: insights-operator-7c7bf5974-mt94h |
| (x8) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-1 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-1_openshift-machine-config-operator(586774e3790e1dd6e120c22fdef66776) |
| | openshift-cloud-controller-manager-operator | ci-op-2fcpj5j6-f6035-2lklf-master-1_2350e18c-76a6-4613-975c-8497a8660419 | cluster-cloud-config-sync-leader | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_2350e18c-76a6-4613-975c-8497a8660419 became leader |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | openshift-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:43c131e0ab4daf9b297d84bda92ba78bd5df8af483ad8e96e10d05d37cd4a08a" already present on machine |
| (x4) | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn | Created | Created container cluster-cloud-controller-manager |
| | openshift-cloud-controller-manager-operator | ci-op-2fcpj5j6-f6035-2lklf-master-1_5574ab72-7d92-4516-b3da-3763a7d8ff65 | cluster-cloud-controller-manager-leader | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_5574ab72-7d92-4516-b3da-3763a7d8ff65 became leader |
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn | Started | Started container cluster-cloud-controller-manager |
| | default | cloud-controller-manager-operator | gcp-cloud-controller-manager:cloud-provider | ResourceCreateSuccess | Resource was successfully created |
| | openshift-cloud-controller-manager | replicaset-controller | gcp-cloud-controller-manager-84676f46cb | SuccessfulCreate | Created pod: gcp-cloud-controller-manager-84676f46cb-74sgj |
| | openshift-cloud-controller-manager | deployment-controller | gcp-cloud-controller-manager | ScalingReplicaSet | Scaled up replica set gcp-cloud-controller-manager-84676f46cb to 2 |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | gcp-cloud-controller-manager | ResourceCreateSuccess | Resource was successfully created |
| | default | cloud-controller-manager-operator | gcp-cloud-controller-manager | ResourceCreateSuccess | Resource was successfully created |
| (x4) | openshift-cloud-controller-manager | cloud-controller-manager-operator | gcp-cloud-controller-manager | ConfigurationCheckFailed | error calculating configuration hash: Secret "gcp-ccm-cloud-credentials" not found |
| | openshift-cloud-controller-manager | replicaset-controller | gcp-cloud-controller-manager-84676f46cb | SuccessfulCreate | Created pod: gcp-cloud-controller-manager-84676f46cb-btj8b |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | gcp-cloud-controller-manager | ResourceCreateSuccess | Resource was successfully created |
| (x5) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| (x6) | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-84676f46cb-74sgj | FailedMount | MountVolume.SetUp failed for volume "cloud-sa-volume" : secret "gcp-ccm-cloud-credentials" not found |
| | openshift-cloud-controller-manager | replicaset-controller | gcp-cloud-controller-manager-84676f46cb | SuccessfulDelete | Deleted pod: gcp-cloud-controller-manager-84676f46cb-btj8b |
| | openshift-cloud-controller-manager | replicaset-controller | gcp-cloud-controller-manager-84676f46cb | SuccessfulDelete | Deleted pod: gcp-cloud-controller-manager-84676f46cb-74sgj |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | gcp-cloud-controller-manager | ResourceUpdateSuccess | Resource was successfully updated |
| | openshift-cloud-controller-manager | deployment-controller | gcp-cloud-controller-manager | ScalingReplicaSet | Scaled down replica set gcp-cloud-controller-manager-84676f46cb to 0 from 2 |
| | openshift-cloud-controller-manager | deployment-controller | gcp-cloud-controller-manager | ScalingReplicaSet | Scaled up replica set gcp-cloud-controller-manager-6658458d69 to 2 |
| | openshift-cloud-controller-manager | replicaset-controller | gcp-cloud-controller-manager-6658458d69 | SuccessfulCreate | Created pod: gcp-cloud-controller-manager-6658458d69-j98j6 |
| (x2) | openshift-cloud-controller-manager | controllermanager | gcp-cloud-controller-manager | NoPods | No matching pods found |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-bwn2n | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ef3d5fbb8b8ca09ab404e00e3d616471fdef91190d13610d028995b47d24b2be" |
| | openshift-cloud-controller-manager | replicaset-controller | gcp-cloud-controller-manager-6658458d69 | SuccessfulCreate | Created pod: gcp-cloud-controller-manager-6658458d69-bwn2n |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-bwn2n | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ef3d5fbb8b8ca09ab404e00e3d616471fdef91190d13610d028995b47d24b2be" in 2.141s (2.141s including waiting). Image size: 466684215 bytes. |
| | default | cloud-node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | Synced | Node synced successfully |
| | openshift-cloud-controller-manager | cloud-controller-manager | cloud-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_8ed2d44a-bc7a-4804-93f6-f3ce87860073 became leader |
| | default | cloud-node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | Synced | Node synced successfully |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-rbac-proxy-crio |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller |
| | default | cloud-node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | Synced | Node synced successfully |
| (x4) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-rbac-proxy-crio |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-j98j6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ef3d5fbb8b8ca09ab404e00e3d616471fdef91190d13610d028995b47d24b2be" |
| | openshift-network-operator | kubelet | network-operator-69d4947f66-6pwvp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c2c10d6d0c5508feaf80dbe5b76cc99fdee0a4c8171e0d9a031cdc4d74a35912" |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-j98j6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ef3d5fbb8b8ca09ab404e00e3d616471fdef91190d13610d028995b47d24b2be" in 2.273s (2.273s including waiting). Image size: 466684215 bytes. |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-j98j6 | Started | Started container cloud-controller-manager |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-j98j6 | Created | Created container cloud-controller-manager |
| | openshift-network-operator | kubelet | network-operator-69d4947f66-6pwvp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c2c10d6d0c5508feaf80dbe5b76cc99fdee0a4c8171e0d9a031cdc4d74a35912" in 3.767s (3.767s including waiting). Image size: 583081171 bytes. |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_72dfbe8f-b28f-406d-8402-4df596451171 became leader |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-network-operator | network-operator-management-state-recorder-managementstatecontroller | network-operator | StatusNotFound | Unable to determine current operator status for cluster-network-operator |
| | openshift-network-operator | job-controller | mtu-prober | SuccessfulCreate | Created pod: mtu-prober-8c2v8 |
| (x9) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-0 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-0_openshift-machine-config-operator(7096de3e2ff01af47d3d10c4673f13f7) |
| (x7) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-2 | BackOff | Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-machine-config-operator(2f4d77c0d454fb1af17c29157b0dce3d) |
| | openshift-network-operator | kubelet | mtu-prober-8c2v8 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c2c10d6d0c5508feaf80dbe5b76cc99fdee0a4c8171e0d9a031cdc4d74a35912" |
| | openshift-network-operator | kubelet | mtu-prober-8c2v8 | Started | Started container prober |
| | openshift-network-operator | kubelet | mtu-prober-8c2v8 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c2c10d6d0c5508feaf80dbe5b76cc99fdee0a4c8171e0d9a031cdc4d74a35912" in 3.814s (3.814s including waiting). Image size: 583081171 bytes. |
| | openshift-network-operator | kubelet | mtu-prober-8c2v8 | Created | Created container prober |
| (x5) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| (x5) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-rbac-proxy-crio |
| | openshift-network-operator | job-controller | mtu-prober | Completed | Job completed |
| | openshift-cloud-network-config-controller | replicaset-controller | cloud-network-config-controller-7699df78d5 | SuccessfulCreate | Created pod: cloud-network-config-controller-7699df78d5-mx8n9 |
| | openshift-cloud-network-config-controller | deployment-controller | cloud-network-config-controller | ScalingReplicaSet | Scaled up replica set cloud-network-config-controller-7699df78d5 to 1 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-multus namespace |
| | openshift-multus | kubelet | multus-r69xv | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-8cm87 |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-62wrg |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-g8lg9 |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-vsdll |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-l7nh6 |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-r69xv |
| | openshift-multus | kubelet | multus-vsdll | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-stcnd" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-flk7c |
| | openshift-multus | kubelet | multus-8cm87 | FailedMount | MountVolume.SetUp failed for volume "cni-binary-copy" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | FailedMount | MountVolume.SetUp failed for volume "cni-binary-copy" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-multus | kubelet | multus-vsdll | FailedMount | MountVolume.SetUp failed for volume "cni-binary-copy" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-7wllj |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-tj4rp |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" |
| | openshift-multus | kubelet | multus-vsdll | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | FailedMount | MountVolume.SetUp failed for volume "cni-binary-copy" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-j2vcl" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-multus | kubelet | multus-8cm87 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-64669dd88c to 2 |
| | openshift-multus | replicaset-controller | multus-admission-controller-64669dd88c | SuccessfulCreate | Created pod: multus-admission-controller-64669dd88c-b4vtj |
| | openshift-multus | replicaset-controller | multus-admission-controller-64669dd88c | SuccessfulCreate | Created pod: multus-admission-controller-64669dd88c-zvr4t |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" in 7.683s (7.683s including waiting). Image size: 571426836 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Started | Started container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Created | Created container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Created | Created container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" in 8.201s (8.201s including waiting). Image size: 571426836 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Started | Started container egress-router-binary-copy |
| | openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-54656c84bd | SuccessfulCreate | Created pod: ovnkube-control-plane-54656c84bd-zpzjn |
| | openshift-ovn-kubernetes | replicaset-controller | ovnkube-control-plane-54656c84bd | SuccessfulCreate | Created pod: ovnkube-control-plane-54656c84bd-cn29j |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace |
| | openshift-ovn-kubernetes | deployment-controller | ovnkube-control-plane | ScalingReplicaSet | Scaled up replica set ovnkube-control-plane-54656c84bd to 2 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-8bwwx |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-qpm7v |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-network-diagnostics namespace |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-b4b8k |
| | openshift-multus | kubelet | multus-r69xv | Created | Created container kube-multus |
| | openshift-multus | kubelet | multus-r69xv | Started | Started container kube-multus |
| | openshift-multus | kubelet | multus-r69xv | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" in 12.158s (12.158s including waiting). Image size: 1209582329 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-network-diagnostics | replicaset-controller | network-check-source-5ff84586ff | SuccessfulCreate | Created pod: network-check-source-5ff84586ff-b49fv |
| | openshift-network-diagnostics | deployment-controller | network-check-source | ScalingReplicaSet | Scaled up replica set network-check-source-5ff84586ff to 1 |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-zpzjn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" in 12.003s (12.003s including waiting). Image size: 571426836 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-cn29j | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-sm44g |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-vkzwz |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-6wq7r |
| | openshift-multus | kubelet | multus-vsdll | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" in 12.226s (12.226s including waiting). Image size: 1209582329 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Started | Started container egress-router-binary-copy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-cn29j | Created | Created container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-cn29j | Started | Started container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-cn29j | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-zpzjn | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Created | Created container egress-router-binary-copy |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-network-node-identity namespace |
| | openshift-multus | kubelet | multus-vsdll | Started | Started container kube-multus |
| | openshift-multus | kubelet | multus-vsdll | Created | Created container kube-multus |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-zpzjn | Created | Created container kube-rbac-proxy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-zpzjn | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-multus | kubelet | multus-8cm87 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" in 13.65s (13.65s including waiting). Image size: 1209582329 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-m577s |
| | openshift-network-node-identity | kubelet | network-node-identity-m577s | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-gjpmc |
| | openshift-network-node-identity | kubelet | network-node-identity-qfbfs | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found |
| | openshift-network-node-identity | daemonset-controller | network-node-identity | SuccessfulCreate | Created pod: network-node-identity-qfbfs |
| | openshift-network-node-identity | kubelet | network-node-identity-gjpmc | FailedMount | MountVolume.SetUp failed for volume "webhook-cert" : secret "network-node-identity-cert" not found |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" in 8.867s (8.867s including waiting). Image size: 691795442 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Created | Created container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Started | Started container cni-plugins |
| | openshift-network-node-identity | kubelet | network-node-identity-m577s | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-network-node-identity | kubelet | network-node-identity-gjpmc | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" |
| | openshift-network-node-identity | kubelet | network-node-identity-qfbfs | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" in 10.432s (10.432s including waiting). Image size: 691795442 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Created | Created container cni-plugins |
| (x8) | openshift-cluster-version | kubelet | cluster-version-operator-59fc58bb8-h6cf2 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 13.013s (13.013s including waiting). Image size: 1406971151 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-network-node-identity | kubelet | network-node-identity-m577s | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 10.459s (10.459s including waiting). Image size: 1406971151 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" in 9.033s (9.033s including waiting). Image size: 389927221 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Created | Created container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Started | Started container bond-cni-plugin |
| | openshift-network-node-identity | kubelet | network-node-identity-m577s | Created | Created container webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 14.64s (14.64s including waiting). Image size: 1406971151 bytes. |
| | openshift-network-node-identity | ci-op-2fcpj5j6-f6035-2lklf-master-1_52494673-a4dc-4839-b3ee-84084b947233 | ovnkube-identity | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_52494673-a4dc-4839-b3ee-84084b947233 became leader |
| | openshift-network-node-identity | kubelet | network-node-identity-m577s | Started | Started container webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Created | Created container sbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-zpzjn | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 14.381s (14.381s including waiting). Image size: 1406971151 bytes. |
| | openshift-ovn-kubernetes | controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-54656c84bd-zpzjn became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qpm7v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Created | Created container ovn-controller |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Created | Created container cni-plugins |
| | openshift-network-node-identity | kubelet | network-node-identity-gjpmc | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 11.924s (11.924s including waiting). Image size: 1406971151 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" in 14.161s (14.161s including waiting). Image size: 691795442 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Started | Started container cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 15.477s (15.477s including waiting). Image size: 1406971151 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Started | Started container kubecfg-setup |
| | openshift-network-node-identity | kubelet | network-node-identity-gjpmc | Created | Created container webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-gjpmc | Started | Started container webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-cn29j | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 15.254s (15.254s including waiting). Image size: 1406971151 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Started | Started container ovn-acl-logging |
| | default | ovnkube-csr-approver-controller | csr-d42fm | CSRApproved | CSR "csr-d42fm" has been approved |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Started | Started container kube-rbac-proxy-node |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Started | Started container bond-cni-plugin |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Started | Started container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" in 9.781s (9.781s including waiting). Image size: 389927221 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Started | Started container kube-rbac-proxy-node |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Created | Created container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Created | Created container ovn-acl-logging |
| | openshift-network-node-identity | kubelet | network-node-identity-qfbfs | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 10.518s (10.518s including waiting). Image size: 1406971151 bytes. |
| | openshift-network-node-identity | kubelet | network-node-identity-qfbfs | Created | Created container webhook |
| | openshift-network-node-identity | kubelet | network-node-identity-qfbfs | Started | Started container webhook |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-tj4rp | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-tj4rp | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-7wllj | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Started | Started container routeoverride-cni |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-flk7c | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" in 3.849s (3.849s including waiting). Image size: 375717862 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Created | Created container routeoverride-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Created | Created container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Created | Created container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Started | Started container sbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-b4b8k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-8bwwx | Started | Started container sbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" in 3.63s (3.631s including waiting). Image size: 389927221 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Started | Started container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Created | Created container bond-cni-plugin |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-flk7c | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-7wllj | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" in 4.312s (4.312s including waiting). Image size: 375717862 bytes. |
| | default | ovnkube-csr-approver-controller | csr-5plv2 | CSRApproved | CSR "csr-5plv2" has been approved |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Created | Created container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" |
| | default | ovnkube-csr-approver-controller | csr-f9q29 | CSRApproved | CSR "csr-f9q29" has been approved |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-b4b8k |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-qpm7v |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulDelete | Deleted pod: ovnkube-node-8bwwx |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Started | Started container routeoverride-cni |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-ccj7k |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" in 4.214s (4.214s including waiting). Image size: 375717862 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Created | Created container routeoverride-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Started | Started container nbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-n6p6d |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Started | Started container whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Created | Created container kube-rbac-proxy-node |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Created | Created container whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" in 7.081s (7.081s including waiting). Image size: 580821249 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Started | Started container nbdb |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Started | Started container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Created | Created container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Created | Created container whereabouts-cni |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-62wrg | Created | Created container kube-multus-additional-cni-plugins |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Started | Started container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Created | Created container sbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Started | Started container whereabouts-cni-bincopy |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-n6p6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Started | Started container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Created | Created container whereabouts-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Created | Created container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" in 8.544s (8.544s including waiting). Image size: 580821249 bytes. |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-sm44g | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-gx988" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-sm44g | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-vkzwz | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-vwhpx" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-6wq7r | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-xthrx" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-l7nh6 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-6wq7r | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | default | ovnkube-csr-approver-controller | csr-2h6qj | CSRApproved | CSR "csr-2h6qj" has been approved |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" in 8.485s (8.485s including waiting). Image size: 580821249 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ccj7k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Started | Started container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Created | Created container whereabouts-cni-bincopy |
| | default | controlplane | ci-op-2fcpj5j6-f6035-2lklf-master-1 | ErrorAddingResource | adding or updating remote node IC resources ci-op-2fcpj5j6-f6035-2lklf-master-1 failed, err - creating interconnect resources for remote zone node ci-op-2fcpj5j6-f6035-2lklf-master-1 for the network default failed : err - failed to find port binding: tstor-ci-op-2fcpj5j6-f6035-2lklf-master-1, after 10s: context deadline exceeded, object not found |
| | default | controlplane | ci-op-2fcpj5j6-f6035-2lklf-master-2 | ErrorAddingResource | adding or updating remote node IC resources ci-op-2fcpj5j6-f6035-2lklf-master-2 failed, err - creating interconnect resources for remote zone node ci-op-2fcpj5j6-f6035-2lklf-master-2 for the network default failed : err - failed to find port binding: tstor-ci-op-2fcpj5j6-f6035-2lklf-master-2, after 10s: context deadline exceeded, object not found |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Created | Created container whereabouts-cni |
| (x18) | openshift-network-diagnostics | kubelet | network-check-target-vkzwz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | multus-additional-cni-plugins-g8lg9 | Started | Started container whereabouts-cni |
openshift-multus |
kubelet |
multus-additional-cni-plugins-g8lg9 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" already present on machine | |
default |
ovnkube-csr-approver-controller |
csr-n4w6t |
CSRApproved |
CSR "csr-n4w6t" has been approved | |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-4qlsv |
| | openshift-controller-manager-operator | multus | openshift-controller-manager-operator-786b85b959-zrm7s | AddedInterface | Add eth0 [10.130.0.13/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-7b64b578df-w9z5s | AddedInterface | Add eth0 [10.130.0.25/23] from ovn-kubernetes |
| | openshift-authentication-operator | kubelet | authentication-operator-7b558f58f9-nfmbb | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:432254b8dc65f17472fe6f9bd5a7cde177658799ebb05baede8f91ee2cd62472" |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-86c7d8d555-x49bl | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9b6f4ef915a76216ca40d3a5438c63d70e4053019a1b91e4af06ad224ec3a9fe" |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7c885b8899-z89zf | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" |
| | openshift-service-ca-operator | kubelet | service-ca-operator-7bf6f695bf-4rjcs | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5c165af177cdd364042386f990d4f500b8aec6ad932d2d09904e2d4c66a843bb" |
| | openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-86c7d8d555-x49bl | AddedInterface | Add eth0 [10.130.0.16/23] from ovn-kubernetes |
| | openshift-etcd-operator | multus | etcd-operator-7bbcf99d5c-9746p | AddedInterface | Add eth0 [10.130.0.29/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7b64b578df-w9z5s | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" |
| | openshift-service-ca-operator | multus | service-ca-operator-7bf6f695bf-4rjcs | AddedInterface | Add eth0 [10.130.0.11/23] from ovn-kubernetes |
| | openshift-etcd-operator | kubelet | etcd-operator-7bbcf99d5c-9746p | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-786b85b959-zrm7s | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:1b8e2349b542c1c7ace19af7d8b375557a8ab9df84e5858e85540714e1e55389" |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-9bd7f8667-lfs5z | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bce976095feb5321614ec8fda031d8c547cf3d990db7e5244ec28ca78dcbc642" |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-9bd7f8667-lfs5z | AddedInterface | Add eth0 [10.130.0.10/23] from ovn-kubernetes |
| | openshift-apiserver-operator | multus | openshift-apiserver-operator-6846798df4-kwxvp | AddedInterface | Add eth0 [10.130.0.22/23] from ovn-kubernetes |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6846798df4-kwxvp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:252cff0c140b9f16ee1902dedf2316ac40cb5b8bdb04bc8b75c84bf44daeda02" |
| | openshift-authentication-operator | multus | authentication-operator-7b558f58f9-nfmbb | AddedInterface | Add eth0 [10.130.0.8/23] from ovn-kubernetes |
| | openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-7c885b8899-z89zf | AddedInterface | Add eth0 [10.130.0.12/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | multus | kube-apiserver-operator-749f4b99b7-fqnd2 | AddedInterface | Add eth0 [10.130.0.17/23] from ovn-kubernetes |
| | openshift-cloud-network-config-controller | multus | cloud-network-config-controller-7699df78d5-mx8n9 | AddedInterface | Add eth0 [10.130.0.34/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-86f6b4f867-vvnvr | Failed | Error: ErrImagePull |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-86f6b4f867-vvnvr | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ce407ee69f5f30ad5abc97ecf508b395e999f09526adcf4fe5c16b43c52b4141": pull QPS exceeded |
| | openshift-cluster-storage-operator | multus | cluster-storage-operator-86f6b4f867-vvnvr | AddedInterface | Add eth0 [10.130.0.24/23] from ovn-kubernetes |
| | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-7699df78d5-mx8n9 | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-749f4b99b7-fqnd2 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" |
| | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-7699df78d5-mx8n9 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:6f9adb6ccf0dfed45237d3a5459f03a073c02460df59949738526c9b841d4487": pull QPS exceeded |
| | openshift-insights | kubelet | insights-operator-7c7bf5974-mt94h | Failed | Error: ErrImagePull |
| | openshift-network-operator | kubelet | iptables-alerter-4qlsv | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" |
| | openshift-config-operator | kubelet | openshift-config-operator-85b957bbfc-dwcrh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5b288899b4ad4ec45171472d7a9d8c5db0504a9399bbb148395ecc91da5c0be" |
| | openshift-insights | kubelet | insights-operator-7c7bf5974-mt94h | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5b9e7d9b800f3edfd88efde26c1f252f6373852e486d8d23df953e97839431de": pull QPS exceeded |
| | openshift-insights | multus | insights-operator-7c7bf5974-mt94h | AddedInterface | Add eth0 [10.130.0.31/23] from ovn-kubernetes |
| | openshift-config-operator | multus | openshift-config-operator-85b957bbfc-dwcrh | AddedInterface | Add eth0 [10.130.0.37/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-8cm87 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine |
| (x2) | openshift-multus | kubelet | multus-8cm87 | Started | Started container kube-multus |
| (x2) | openshift-insights | kubelet | insights-operator-7c7bf5974-mt94h | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5b9e7d9b800f3edfd88efde26c1f252f6373852e486d8d23df953e97839431de" |
| (x2) | openshift-multus | kubelet | multus-8cm87 | Created | Created container kube-multus |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-86f6b4f867-vvnvr | Failed | Error: ImagePullBackOff |
| (x2) | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-7699df78d5-mx8n9 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:6f9adb6ccf0dfed45237d3a5459f03a073c02460df59949738526c9b841d4487" |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-86f6b4f867-vvnvr | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ce407ee69f5f30ad5abc97ecf508b395e999f09526adcf4fe5c16b43c52b4141" |
| (x2) | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-7699df78d5-mx8n9 | Failed | Error: ImagePullBackOff |
| (x2) | openshift-insights | kubelet | insights-operator-7c7bf5974-mt94h | Failed | Error: ImagePullBackOff |
| | openshift-network-operator | kubelet | iptables-alerter-htnrl | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-htnrl |
| | openshift-network-operator | kubelet | iptables-alerter-htnrl | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" in 3.36s (3.36s including waiting). Image size: 563905988 bytes. |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-m7hdx |
| | openshift-network-operator | kubelet | iptables-alerter-htnrl | Created | Created container iptables-alerter |
| | openshift-network-operator | kubelet | iptables-alerter-htnrl | Started | Started container iptables-alerter |
| | openshift-config-operator | kubelet | openshift-config-operator-85b957bbfc-dwcrh | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5b288899b4ad4ec45171472d7a9d8c5db0504a9399bbb148395ecc91da5c0be" in 8.228s (8.228s including waiting). Image size: 420091591 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-network-operator | kubelet | iptables-alerter-4qlsv | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" in 9.244s (9.244s including waiting). Image size: 563905988 bytes. |
| | openshift-etcd-operator | kubelet | etcd-operator-7bbcf99d5c-9746p | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" in 9.408s (9.408s including waiting). Image size: 500148391 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Started | Started container kubecfg-setup |
| | openshift-authentication-operator | kubelet | authentication-operator-7b558f58f9-nfmbb | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:432254b8dc65f17472fe6f9bd5a7cde177658799ebb05baede8f91ee2cd62472" in 9.644s (9.644s including waiting). Image size: 494440980 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-7b64b578df-w9z5s | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" in 9.457s (9.457s including waiting). Image size: 479171827 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-786b85b959-zrm7s | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:1b8e2349b542c1c7ace19af7d8b375557a8ab9df84e5858e85540714e1e55389" in 9.489s (9.489s including waiting). Image size: 489811608 bytes. |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-9bd7f8667-lfs5z | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bce976095feb5321614ec8fda031d8c547cf3d990db7e5244ec28ca78dcbc642" in 9.345s (9.345s including waiting). Image size: 469441406 bytes. |
| | openshift-config-operator | kubelet | openshift-config-operator-85b957bbfc-dwcrh | Created | Created container openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-85b957bbfc-dwcrh | Started | Started container openshift-api |
| | openshift-config-operator | kubelet | openshift-config-operator-85b957bbfc-dwcrh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c0634bf2f0bb787b769eac28c0323ae2558b07adf3b851b5a46ed0c968909a2d" |
| | openshift-service-ca-operator | kubelet | service-ca-operator-7bf6f695bf-4rjcs | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5c165af177cdd364042386f990d4f500b8aec6ad932d2d09904e2d4c66a843bb" in 9.528s (9.528s including waiting). Image size: 495446250 bytes. |
| | openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-749f4b99b7-fqnd2 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" in 9.229s (9.229s including waiting). Image size: 496739801 bytes. |
| | openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-86c7d8d555-x49bl | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9b6f4ef915a76216ca40d3a5438c63d70e4053019a1b91e4af06ad224ec3a9fe" in 9.45s (9.45s including waiting). Image size: 474890962 bytes. |
| | openshift-apiserver-operator | kubelet | openshift-apiserver-operator-6846798df4-kwxvp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:252cff0c140b9f16ee1902dedf2316ac40cb5b8bdb04bc8b75c84bf44daeda02" in 9.467s (9.467s including waiting). Image size: 494096773 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Started | Started container ovn-acl-logging |
| | openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-7c885b8899-z89zf | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" in 9.539s (9.539s including waiting). Image size: 481662796 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Created | Created container ovn-controller |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to BuildCSIVolumes=true |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-7bf6f695bf-4rjcs_52ce40fa-5faa-4dfc-84a3-2950c5573414 became leader |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Created | Created container nbdb |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Started | Started container nbdb |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c"...), + }, + }, + "controllers": []any{ + string("openshift.io/build"), string("openshift.io/build-config-change"), + string("openshift.io/builder-rolebindings"), + string("openshift.io/builder-serviceaccount"), + string("-openshift.io/default-rolebindings"), string("openshift.io/deployer"), + string("openshift.io/deployer-rolebindings"), + string("openshift.io/deployer-serviceaccount"), + string("openshift.io/deploymentconfig"), string("openshift.io/image-import"), + string("openshift.io/image-puller-rolebindings"), + string("openshift.io/image-signature-import"), + string("openshift.io/image-trigger"), string("openshift.io/ingress-ip"), + string("openshift.io/ingress-to-route"), + string("openshift.io/origin-namespace"), ..., + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a"...), + }, + }, + "featureGates": []any{string("BuildCSIVolumes=true")}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-786b85b959-zrm7s_cf0f01f7-fb62-49fb-9fc6-3fa9150c2171 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-nodecontroller | etcd-operator | MasterNodeObserved | Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/csi-snapshot-controller-pdb -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-7bbcf99d5c-9746p_84a1d438-fdee-46f8-ae96-f22346f73af2 became leader |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-network-operator | kubelet | iptables-alerter-4qlsv | Started | Started container iptables-alerter |
| | openshift-network-operator | kubelet | iptables-alerter-4qlsv | Created | Created container iptables-alerter |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-route-controller-manager namespace |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled up replica set csi-snapshot-webhook-64d5477c9 to 2 |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-6846798df4-kwxvp_f947f910-28b4-487b-bb40-e0bc7d75c3d5 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] | |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | ServiceCreated | Created Service/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-7b558f58f9-nfmbb_c35da614-c7c0-486d-afc0-632c6d9ebe7c became leader |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-controller-manager namespace |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotwebhookcontroller-deployment-controller--csisnapshotwebhookcontroller | csi-snapshot-controller-operator | DeploymentCreated | Created Deployment.apps/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-7c885b8899-z89zf_4da34946-df5a-4df0-902a-357a27f34d82 became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-9bd7f8667-lfs5z_a81c6866-459c-456c-932b-17b35c3bbfdc became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7b64b578df-w9z5s_b068be88-aea4-4197-85ce-227bc33e6f21 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-86c7d8d555-x49bl_f0b660d9-849a-4bd9-89fc-91abd0b0966e became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | ServiceAccountCreated | Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-55594bbb64 | SuccessfulCreate | Created pod: csi-snapshot-controller-55594bbb64-w77tp |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-749f4b99b7-fqnd2_c2644c86-a736-43b1-8e75-e00794eb6ba6 became leader |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceCreated | Created Namespace/openshift-service-ca because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-nodecontroller | etcd-operator | MasterNodeObserved | Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-nodecontroller | etcd-operator | MasterNodeObserved | Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-74d456756d to 3 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from Unknown to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-route-controller-manager because it was missing |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "raw-internal" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreateFailed | Failed to create Deployment.apps/route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-55594bbb64 | SuccessfulCreate | Created pod: csi-snapshot-controller-55594bbb64-rfpvx |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-55594bbb64-w77tp | AddedInterface | Add eth0 [10.130.0.38/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-55594bbb64 to 2 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/csi-snapshot-webhook-clusterrolebinding because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-csisnapshotstaticresourcecontroller | csi-snapshot-controller-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/csi-snapshot-webhook-pdb -n openshift-cluster-storage-operator because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-webhook-64d5477c9 | SuccessfulCreate | Created pod: csi-snapshot-webhook-64d5477c9-hl7k6 |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-webhook-64d5477c9 | SuccessfulCreate | Created pod: csi-snapshot-webhook-64d5477c9-wpvpv |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | NamespaceCreated | Created Namespace/openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreateFailed | Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-service-ca namespace |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-csisnapshotgueststaticresourcecontroller | csi-snapshot-controller-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/csi-snapshot-webhook-clusterrole because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-kubestorageversionmigratorstaticresources | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceCreated |
Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ServiceCreated |
Created Service/controller-manager -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources |
openshift-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentCreated |
Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-nodecontroller |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-1 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] | |
openshift-controller-manager |
replicaset-controller |
controller-manager-74d456756d |
SuccessfulCreate |
Created pod: controller-manager-74d456756d-twm5k | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m7hdx |
Started |
Started container sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m7hdx |
Created |
Created container sbdb | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-m7hdx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine | |
openshift-kube-storage-version-migrator |
deployment-controller |
migrator |
ScalingReplicaSet |
Scaled up replica set migrator-56fbddbb97 to 1 | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes",Available message changed from "CSISnapshotControllerAvailable: Waiting for Deployment" to "CSISnapshotControllerAvailable: Waiting for Deployment\nCSISnapshotWebhookControllerAvailable: Waiting for Deployment" | |
| (x8) | openshift-controller-manager |
replicaset-controller |
controller-manager-74d456756d |
FailedCreate |
Error creating: pods "controller-manager-74d456756d-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-55594bbb64-w77tp |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5a22e664cd05bf6f8a97d2f7b96ad5def60ce4c28d17c9d2d4ef0a14ed70714" | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7b877984c7 |
SuccessfulCreate |
Created pod: route-controller-manager-7b877984c7-9dd9p | |
openshift-controller-manager |
replicaset-controller |
controller-manager-74d456756d |
SuccessfulCreate |
Created pod: controller-manager-74d456756d-fdstp | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7b877984c7 |
SuccessfulCreate |
Created pod: route-controller-manager-7b877984c7-fvxj4 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7b877984c7 |
SuccessfulCreate |
Created pod: route-controller-manager-7b877984c7-pghzh | |
openshift-kube-storage-version-migrator |
replicaset-controller |
migrator-56fbddbb97 |
SuccessfulCreate |
Created pod: migrator-56fbddbb97-d4szr | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ServiceAccountCreated |
Created ServiceAccount/service-ca -n openshift-service-ca because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-guardcontroller |
kube-controller-manager-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/kube-controller-manager-guard-pdb -n openshift-kube-controller-manager because it was missing | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator |
kube-storage-version-migrator-operator |
DeploymentCreated |
Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-nodecontroller |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-2 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-78b7d7d855 to 1 from 0 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-74d456756d to 2 from 3 | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-nodecontroller |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-0 | |
| (x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorVersionChanged |
clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 3 nodes are at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/openshift-kube-scheduler-guard-pdb -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-nodecontroller |
openshift-kube-scheduler-operator |
MasterNodeObserved |
Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-2 | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-nodecontroller |
openshift-kube-scheduler-operator |
MasterNodeObserved |
Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-nodecontroller |
openshift-kube-scheduler-operator |
MasterNodeObserved |
Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-0 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found" | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorVersionChanged |
clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."),Upgradeable changed from Unknown to True ("All is well") | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-high-cpu-usage-alert-controller-highcpuusagealertcontroller |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/cpu-utilization -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-serviceaccountissuercontroller |
kube-apiserver-operator |
ServiceAccountIssuer |
Issuer set to default value "https://kubernetes.default.svc" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-controller-55594bbb64-rfpvx |
AddedInterface |
Add eth0 [10.128.0.6/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") | |
| (x2) | openshift-kube-scheduler |
controllermanager |
openshift-kube-scheduler-guard-pdb |
NoPods |
No matching pods found |
| (x2) | openshift-kube-controller-manager |
controllermanager |
kube-controller-manager-guard-pdb |
NoPods |
No matching pods found |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
SecretCreated |
Created Secret/signing-key -n openshift-service-ca because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-7b877984c7 to 3 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-78b7d7d855 |
SuccessfulCreate |
Created pod: controller-manager-78b7d7d855-kpv7q | |
openshift-kube-storage-version-migrator |
multus |
migrator-56fbddbb97-d4szr |
AddedInterface |
Add eth0 [10.128.0.8/23] from ovn-kubernetes | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodeObserved |
Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-0 |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from Unknown to False ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
FastControllerResync |
Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: EvaluationConditionsDetected changed from Unknown to False ("All is well") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded changed from Unknown to False ("All is well"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-55594bbb64-rfpvx |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5a22e664cd05bf6f8a97d2f7b96ad5def60ce4c28d17c9d2d4ef0a14ed70714" | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-config-observer-configobserver | openshift-kube-scheduler-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-config-operator | kubelet | openshift-config-operator-85b957bbfc-dwcrh | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c0634bf2f0bb787b769eac28c0323ae2558b07adf3b851b5a46ed0c968909a2d" in 3.862s (3.862s including waiting). Image size: 475780589 bytes. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-serving-metrics-ci-op-2fcpj5j6-f6035-2lklf-master-0" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | SecretCreated | Created Secret/etcd-serving-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-serving-ci-op-2fcpj5j6-f6035-2lklf-master-0" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | SecretCreated | Created Secret/etcd-peer-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "All is well" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "controlPlane": map[string]any{"replicas": float64(3)}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | CABundleUpdateRequired | "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-storage-version-migrator | kubelet | migrator-56fbddbb97-d4szr | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:25385653e181237073983226b45ebc615db6898af12f9d6ab3cce2a61bd89f31" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-service-ca | multus | service-ca-7949b5fbb4-gsbvx | AddedInterface | Add eth0 [10.128.0.11/23] from ovn-kubernetes |
| | openshift-service-ca | deployment-controller | service-ca | ScalingReplicaSet | Scaled up replica set service-ca-7949b5fbb4 to 1 |
| | openshift-service-ca | kubelet | service-ca-7949b5fbb4-gsbvx | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5c165af177cdd364042386f990d4f500b8aec6ad932d2d09904e2d4c66a843bb" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | ConfigMapCreated | Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator | DeploymentCreated | Created Deployment.apps/service-ca -n openshift-service-ca because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-service-ca | replicaset-controller | service-ca-7949b5fbb4 | SuccessfulCreate | Created pod: service-ca-7949b5fbb4-gsbvx |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-openshiftcontrollermanagerstaticresources-staticresources | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-service-ca-operator | service-ca-operator-resource-sync-controller-resourcesynccontroller | service-ca-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-config-managed because it was missing |
| (x5) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-all-bundles-0,etcd-endpoints-0,etcd-pod-0, secrets: etcd-all-certs-0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodeObserved | Observed new master node ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller | etcd-operator | TargetUpdateRequired | "etcd-peer-ci-op-2fcpj5j6-f6035-2lklf-master-0" in "openshift-etcd" requires a new target cert/key pair: secret doesn't exist |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-5b66777f7c-9pqmc | FailedMount | MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-85b957bbfc-dwcrh_cc23cb46-e50f-4fe4-a8e8-161641d7b792 became leader |
| (x6) | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | FailedMount | MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "ReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | RoutingConfigSubdomainChanged | Domain changed from "" to "apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-7b877984c7-fvxj4 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| (x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" |
| (x6) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | FailedMount | MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-operator-55d6dfd54f-k2phh | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : secret "mco-proxy-tls" not found |
| | openshift-controller-manager | kubelet | controller-manager-74d456756d-twm5k | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-controller-manager | kubelet | controller-manager-74d456756d-twm5k | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 0, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 0, desired generation is 1.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 3\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.openshift-global-ca.configmap |
| (x6) | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing message changed from "All is well" to "NodeInstallerProgressing: 3 nodes are at revision 0",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0") |
| (x6) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-5b66777f7c-9pqmc | FailedMount | MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.5:2379 |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,MetricsServer=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NodeDisruptionPolicy=true,OpenShiftPodSecurityAdmission=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImagesAWS=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NetworkSegmentation=false,NewOLM=false,NodeSwap=false,OVNObservability=false,OnClusterBuild=false,PersistentIPsForVirtualization=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VSphereMultiVCenters=false,VolumeGroupSnapshot=false |
| | openshift-controller-manager | replicaset-controller | controller-manager-74d456756d | SuccessfulDelete | Deleted pod: controller-manager-74d456756d-fdstp |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | TargetUpdateRequired | "csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: secret doesn't exist |
| (x6) | openshift-image-registry | kubelet | cluster-image-registry-operator-7c8c54f569-rsqg2 | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found |
| (x6) | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | FailedMount | MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found |
| (x6) | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| (x6) | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-55594bbb64-w77tp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5a22e664cd05bf6f8a97d2f7b96ad5def60ce4c28d17c9d2d4ef0a14ed70714" in 2.832s (2.832s including waiting). Image size: 436168788 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "RevisionControllerDegraded: configmap \"audit\" not found" |
| (x6) | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7f4b9d6458 to 1 from 0 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-74d456756d to 1 from 2 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from Unknown to False ("All is well") |
| (x6) | openshift-machine-api | kubelet | control-plane-machine-set-operator-7667c744f7-8tlf7 | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : secret "control-plane-machine-set-operator-tls" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " |
| | openshift-controller-manager | replicaset-controller | controller-manager-7f4b9d6458 | SuccessfulCreate | Created pod: controller-manager-7f4b9d6458-ltvdx |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well"),status.versions changed from [{"feature-gates" ""} {"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] to [{"feature-gates" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| (x2) | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "feature-gates" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-m7hdx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "apiServerArguments": map[string]any{ + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + }, + "projectConfig": map[string]any{"projectRequestMessage": string("")}, + "routingConfig": map[string]any{"subdomain": string("apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX")}, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]any{"urls": []any{string("https://10.0.0.5:2379")}}, } |
| | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),EvaluationConditionsDetected set to Unknown (""),status.versions changed from [] to [{"feature-gates" ""} {"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-config-operator | config-operator-kubecloudconfigcontroller | openshift-config-operator | KubeCloudConfigController | openshift-config-managed/kube-cloud-config ConfigMap was updated |
openshift-config-operator |
config-operator-kubecloudconfigcontroller |
openshift-config-operator |
ConfigMapCreated |
Created ConfigMap/kube-cloud-config -n openshift-config-managed because it was missing | |
| (x2) | openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorVersionChanged |
clusteroperator/config-operator version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
ConfigOperatorStatusChanged |
Operator conditions defaulted: [{OperatorAvailable True 2024-10-24 13:03:32 +0000 UTC AsExpected } {OperatorProgressing False 2024-10-24 13:03:32 +0000 UTC AsExpected } {OperatorUpgradeable True 2024-10-24 13:03:32 +0000 UTC AsExpected }] | |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | NamespaceUpdated | Updated Namespace/openshift-kube-scheduler because it changed |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller-staticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-55594bbb64-rfpvx | Started | Started container snapshot-controller |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-etcd because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-55594bbb64-rfpvx | Created | Created container snapshot-controller |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller-staticresources | etcd-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-55594bbb64-rfpvx | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5a22e664cd05bf6f8a97d2f7b96ad5def60ce4c28d17c9d2d4ef0a14ed70714" in 2.272s (2.272s including waiting). Image size: 436168788 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | TargetConfigDeleted | Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-ca-bundle -n openshift-config because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | NamespaceUpdated | Updated Namespace/openshift-etcd because it changed |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-55594bbb64-w77tp | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-55594bbb64-w77tp became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/kube-apiserver-guard-pdb -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | NamespaceCreated | Created Namespace/openshift-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-apiserver namespace |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing |
| (x2) | openshift-kube-apiserver | controllermanager | kube-apiserver-guard-pdb | NoPods | No matching pods found |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-kube-apiserver-node | kube-apiserver-operator | MasterNodesReadyChanged | The master nodes not ready: node "ci-op-2fcpj5j6-f6035-2lklf-master-0" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?) |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing |
| | openshift-kube-storage-version-migrator | kubelet | migrator-56fbddbb97-d4szr | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:25385653e181237073983226b45ebc615db6898af12f9d6ab3cce2a61bd89f31" in 2.216s (2.216s including waiting). Image size: 423058873 bytes. |
| | openshift-kube-storage-version-migrator | kubelet | migrator-56fbddbb97-d4szr | Created | Created container migrator |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: ",Progressing changed from Unknown to False ("All is well") |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Available message changed from "CSISnapshotControllerAvailable: Waiting for Deployment\nCSISnapshotWebhookControllerAvailable: Waiting for Deployment" to "CSISnapshotWebhookControllerAvailable: Waiting for Deployment" |
| | openshift-kube-storage-version-migrator | kubelet | migrator-56fbddbb97-d4szr | Started | Started container migrator |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,MetricsServer=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NodeDisruptionPolicy=true,OpenShiftPodSecurityAdmission=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImagesAWS=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NetworkSegmentation=false,NewOLM=false,NodeSwap=false,OVNObservability=false,OnClusterBuild=false,PersistentIPsForVirtualization=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VSphereMultiVCenters=false,VolumeGroupSnapshot=false |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-config because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods",status.versions changed from [] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"csi-snapshot-controller" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| (x4) | openshift-controller-manager | kubelet | controller-manager-74d456756d-fdstp | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| (x4) | openshift-controller-manager | kubelet | controller-manager-74d456756d-fdstp | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: ",Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 0, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 0, desired generation is 1.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 3\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| (x2) | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-86f6b4f867-vvnvr | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ce407ee69f5f30ad5abc97ecf508b395e999f09526adcf4fe5c16b43c52b4141" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-kube-controller-manager because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metric-serving-ca -n openshift-etcd-operator because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | SecretUpdated | Updated Secret/etcd-all-certs -n openshift-etcd because it changed |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-env-var-controller | etcd-operator | EnvVarControllerUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-service-ca | kubelet | service-ca-7949b5fbb4-gsbvx | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5c165af177cdd364042386f990d4f500b8aec6ad932d2d09904e2d4c66a843bb" in 3.141s (3.141s including waiting). Image size: 495446250 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceCreated | Created Service/api -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-control-plane-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdated | Updated Secret/kube-apiserver-to-kubelet-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| (x2) | openshift-insights | kubelet | insights-operator-7c7bf5974-mt94h | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5b9e7d9b800f3edfd88efde26c1f252f6373852e486d8d23df953e97839431de" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/: configmaps "loadbalancer-serving-ca" already exists |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | openshift-kube-scheduler-operator | ServiceCreated | Created Service/scheduler -n openshift-kube-scheduler because it was missing |
| (x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretUpdateFailed | Failed to update Secret/: Operation cannot be fulfilled on secrets "kube-control-plane-signer": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"assets/openshift-controller-manager/route-controller-manager-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "extendedArguments": map[string]any{ + "cluster-cidr": []any{string("10.128.0.0/14")}, + "cluster-name": []any{string("ci-op-2fcpj5j6-f6035-2lklf")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "service-cluster-ip-range": []any{string("172.30.0.0/16")}, + }, + "featureGates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), + string("DisableKubeletCloudCredentialProviders=true"), + string("GCPLabelsTags=true"), string("HardwareSpeed=true"), + string("IngressControllerLBSubnetsAWS=true"), string("KMSv1=true"), + string("ManagedBootImages=true"), string("MetricsServer=true"), + string("MultiArchInstallAWS=true"), ..., + }, + "servingInfo": map[string]any{ + "cipherSuites": []any{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| (x2) | openshift-cloud-network-config-controller | kubelet | cloud-network-config-controller-7699df78d5-mx8n9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:6f9adb6ccf0dfed45237d3a5459f03a073c02460df59949738526c9b841d4487" |
| (x3) | openshift-controller-manager | kubelet | controller-manager-74d456756d-twm5k | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorVersionChanged | clusteroperator/service-ca version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-service-ca | service-ca-controller | service-ca-controller-lock | LeaderElection | service-ca-7949b5fbb4-gsbvx_2e03f8c1-f0ab-4093-8690-ee2d46ca2509 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreateFailed | Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found |
| | openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated extendedArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,MetricsServer=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NodeDisruptionPolicy=true,OpenShiftPodSecurityAdmission=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImagesAWS=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NetworkSegmentation=false,NewOLM=false,NodeSwap=false,OVNObservability=false,OnClusterBuild=false,PersistentIPsForVirtualization=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VSphereMultiVCenters=false,VolumeGroupSnapshot=false |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | NoValidCertificateFound | No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-network-diagnostics | multus | network-check-target-6wq7r | AddedInterface | Add eth0 [10.130.0.3/23] from ovn-kubernetes |
| | default | ovnkube-csr-approver-controller | csr-d5vph | CSRApproved | CSR "csr-d5vph" has been approved |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | CSRCreated | A csr "system:openshift:openshift-authenticator-hl85g" is created for OpenShiftAuthenticatorCertRequester |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: " to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator | authentication-operator | CSRApproval | The CSR "system:openshift:openshift-authenticator-hl85g" has been approved |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd -n openshift-etcd-operator because it was missing |
| (x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-apiserver because it was missing |
| (x10) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMissing | no observedConfig |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceUpdated | Updated Service/etcd -n openshift-etcd because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources-staticresources | etcd-operator | ServiceMonitorCreated | Created ServiceMonitor.monitoring.coreos.com/etcd-minimal -n openshift-etcd-operator because it was missing |
| | openshift-network-diagnostics | multus | network-check-target-sm44g | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert: configmap doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-oauth-apiserver namespace |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 3 nodes are at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources | openshift-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing |
| (x2) | openshift-apiserver | controllermanager | openshift-apiserver-pdb | NoPods | No matching pods found |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTokenConfig | accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled down replica set csi-snapshot-webhook-64d5477c9 to 1 from 2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-webhook-74d568664 | SuccessfulCreate | Created pod: csi-snapshot-webhook-74d568664-nsm6t |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled up replica set csi-snapshot-webhook-74d568664 to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | StartingNewRevision | new revision 1 triggered by "configmap \"audit-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"etcd-pod-0\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigWriteError |
Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-config-observer-configobserver |
kube-controller-manager-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]any{   "extendedArguments": map[string]any{"cluster-cidr": []any{string("10.128.0.0/14")}, "cluster-name": []any{string("ci-op-2fcpj5j6-f6035-2lklf")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "service-cluster-ip-range": []any{string("172.30.0.0/16")}},   "featureGates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, + "serviceServingCert": map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + },   "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")},   } | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-webhook-64d5477c9 |
SuccessfulDelete |
Deleted pod: csi-snapshot-webhook-64d5477c9-wpvpv | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod -n openshift-etcd because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing | |
| (x5) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-64d5477c9-wpvpv |
FailedMount |
MountVolume.SetUp failed for volume "certs" : secret "csi-snapshot-webhook-secret" not found |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing | |
| (x5) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-64d5477c9-hl7k6 |
FailedMount |
MountVolume.SetUp failed for volume "certs" : secret "csi-snapshot-webhook-secret" not found |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nAPIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/openshift-apiserver-pdb -n openshift-apiserver because it was missing | |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTemplates |
templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAuditProfile |
AuditProfile changed from '%!s(<nil>)' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotwebhookcontroller-deployment-controller--csisnapshotwebhookcontroller |
csi-snapshot-controller-operator |
DeploymentUpdated |
Updated Deployment.apps/csi-snapshot-webhook -n openshift-cluster-storage-operator because it changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources-staticresources |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing | |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIServerURL |
loginURL changed from to https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443 |
openshift-cluster-storage-operator |
multus |
csi-snapshot-webhook-74d568664-nsm6t |
AddedInterface |
Add eth0 [10.128.0.13/23] from ovn-kubernetes | |
| (x24) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| (x4) | openshift-controller-manager |
kubelet |
controller-manager-78b7d7d855-kpv7q |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-74d568664-nsm6t |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e90b804647b2ffdd6650887ca5bbe6c5d7f4988343ea35f9214a2523b3f5cc76" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]any{\n+\u00a0\t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []any{string(\"https://10.0.0.5:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_S\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_S\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n" |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://10.0.0.5:2379 |
| (x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIAudiences |
service account issuer changed from to https://kubernetes.default.svc |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ServiceCreated |
Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" | |
openshift-cloud-network-config-controller |
kubelet |
cloud-network-config-controller-7699df78d5-mx8n9 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:6f9adb6ccf0dfed45237d3a5459f03a073c02460df59949738526c9b841d4487" in 3.96s (3.96s including waiting). Image size: 484450206 bytes. | |
openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-86f6b4f867-vvnvr |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ce407ee69f5f30ad5abc97ecf508b395e999f09526adcf4fe5c16b43c52b4141" in 4.97s (4.97s including waiting). Image size: 484426543 bytes. | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceCreated |
Created Service/apiserver -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,internal-loadbalancer-serving-certkey,kubelet-client,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing | |
openshift-insights |
kubelet |
insights-operator-7c7bf5974-mt94h |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5b9e7d9b800f3edfd88efde26c1f252f6373852e486d8d23df953e97839431de" in 3.97s (3.97s including waiting). Image size: 485823048 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-1 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: secret doesn't exist | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
DeploymentCreated |
Created Deployment.apps/apiserver -n openshift-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-endpoints-1 -n openshift-etcd because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" | |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
| | openshift-cluster-csi-drivers | deployment-controller | gcp-pd-csi-driver-operator | ScalingReplicaSet | Scaled up replica set gcp-pd-csi-driver-operator-7ddb788594 to 1 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-nsm6t | Started | Started container webhook |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-nsm6t | Created | Created container webhook |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-nsm6t | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e90b804647b2ffdd6650887ca5bbe6c5d7f4988343ea35f9214a2523b3f5cc76" in 2.121s (2.121s including waiting). Image size: 419594381 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace |
| | openshift-authentication-operator | oauth-apiserver-openshiftauthenticatorcertrequester | authentication-operator | ClientCertificateCreated | A new client certificate for OpenShiftAuthenticatorCertRequester is available |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | NamespaceCreated | Created Namespace/openshift-authentication because it was missing |
| | openshift-cluster-csi-drivers | replicaset-controller | gcp-pd-csi-driver-operator-7ddb788594 | SuccessfulCreate | Created pod: gcp-pd-csi-driver-operator-7ddb788594-zjfz2 |
| (x7) | openshift-cluster-csi-drivers | replicaset-controller | gcp-pd-csi-driver-operator-7ddb788594 | FailedCreate | Error creating: pods "gcp-pd-csi-driver-operator-7ddb788594-" is forbidden: error looking up service account openshift-cluster-csi-drivers/gcp-pd-csi-driver-operator: serviceaccount "gcp-pd-csi-driver-operator" not found |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-86f6b4f867-vvnvr_14851382-96bc-49d7-89cd-b92caa94496f became leader |
| (x2) | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}],status.versions changed from [] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}] to [{"" "serviceaccounts" "openshift-cluster-csi-drivers" "gcp-pd-csi-driver-operator"} {"rbac.authorization.k8s.io" "roles" "openshift-cluster-csi-drivers" "gcp-pd-csi-driver-operator-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-cluster-csi-drivers" "gcp-pd-csi-driver-operator-rolebinding"} {"rbac.authorization.k8s.io" "clusterroles" "" "gcp-pd-csi-driver-operator-clusterrole"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "gcp-pd-csi-driver-operator-clusterrolebinding"} {"operator.openshift.io" "clustercsidrivers" "" "pd.csi.storage.gke.io"} {"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}] |
| | openshift-apiserver | replicaset-controller | apiserver-5f8dd75f5c | SuccessfulCreate | Created pod: apiserver-5f8dd75f5c-s5f9w |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-GCPPD | cluster-storage-operator | DeploymentCreated | Created Deployment.apps/gcp-pd-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-GCPPD | cluster-storage-operator | ClusterCSIDriverCreated | Created ClusterCSIDriver.operator.openshift.io/pd.csi.storage.gke.io because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to True ("GCPPDProgressing: Waiting for Deployment to act on changes") |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-gcppdcsidriveroperatorstaticcontroller-gcppdcsidriveroperatorstaticcontroller | cluster-storage-operator | ServiceAccountCreated | Created ServiceAccount/gcp-pd-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-gcppdcsidriveroperatorstaticcontroller-gcppdcsidriveroperatorstaticcontroller | cluster-storage-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/gcp-pd-csi-driver-operator-role -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-gcppdcsidriveroperatorstaticcontroller-gcppdcsidriveroperatorstaticcontroller | cluster-storage-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/gcp-pd-csi-driver-operator-rolebinding -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-gcppdcsidriveroperatorstaticcontroller-gcppdcsidriveroperatorstaticcontroller | cluster-storage-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/gcp-pd-csi-driver-operator-clusterrole because it was missing |
| | openshift-apiserver | replicaset-controller | apiserver-5f8dd75f5c | SuccessfulCreate | Created pod: apiserver-5f8dd75f5c-7rz6r |
| | openshift-apiserver | replicaset-controller | apiserver-5f8dd75f5c | SuccessfulCreate | Created pod: apiserver-5f8dd75f5c-z2rvt |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-5f8dd75f5c to 3 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled down replica set csi-snapshot-webhook-64d5477c9 to 0 from 1 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Available changed from False to True ("All is well") |
| | openshift-cluster-csi-drivers | multus | gcp-pd-csi-driver-operator-7ddb788594-zjfz2 | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-operator-7ddb788594-zjfz2 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:11cfe62fc07450292261dab71b3eb1ef1fc615a24e05c282044403264b567db6" |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-webhook-74d568664 | SuccessfulCreate | Created pod: csi-snapshot-webhook-74d568664-qv7c9 |
| | openshift-cluster-storage-operator | multus | csi-snapshot-webhook-74d568664-qv7c9 | AddedInterface | Add eth0 [10.130.0.43/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/api -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-webhook-64d5477c9 | SuccessfulDelete | Deleted pod: csi-snapshot-webhook-64d5477c9-hl7k6 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to update pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources | kube-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to update pods" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded message changed from "All is well" to "GCPPDCSIDriverOperatorDeploymentDegraded: deployment openshift-cluster-csi-drivers/gcp-pd-csi-driver-operator has some pods failing; unavailable replicas=1" |
| | openshift-cluster-storage-operator | cluster-storage-operator-CSIDriverStarter-gcppdcsidriveroperatorstaticcontroller-gcppdcsidriveroperatorstaticcontroller | cluster-storage-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-csi-driver-operator-clusterrolebinding because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-webhook | ScalingReplicaSet | Scaled up replica set csi-snapshot-webhook-74d568664 to 2 from 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-1 -n openshift-etcd because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "GCPPDProgressing: Waiting for Deployment to act on changes" to "GCPPDProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\"oauthConfig\": map[string]any{\n+\u00a0\t\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\t\"loginURL\": string(\"https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443\"),\n+\u00a0\t\t\t\"templates\": map[string]any{\n+\u00a0\t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tokenConfig\": map[string]any{\n+\u00a0\t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+\u00a0\t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n+\u00a0\t\t\"serverArguments\": map[string]any{\n+\u00a0\t\t\t\"audit-log-format\": []any{string(\"json\")},\n+\u00a0\t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+\u00a0\t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+\u00a0\t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+\u00a0\t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+\u00a0\t\t},\n+\u00a0\t\t\"servingInfo\": map[string]any{\n+\u00a0\t\t\t\"cipherSuites\": []any{\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_S\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_S\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+\u00a0\t},\n\u00a0\u00a0)\n" |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | SecretCreated | Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-operator-7ddb788594-zjfz2 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:11cfe62fc07450292261dab71b3eb1ef1fc615a24e05c282044403264b567db6" in 2.41s (2.41s including waiting). Image size: 479945655 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,internal-loadbalancer-serving-certkey,kubelet-client,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
SecretCreated |
Created Secret/etcd-all-certs-1 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverservicemonitorcontroller-gcppddriverservicemonitorcontroller |
gcp-pd-csi-driver-operator |
ServiceMonitorCreated |
Created ServiceMonitor.monitoring.coreos.com/v1 because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddrivercontrollerservicecontroller-deployment-controller--gcppddrivercontrollerservicecontroller |
gcp-pd-csi-driver-operator |
DeploymentCreated |
Created Deployment.apps/gcp-pd-csi-driver-controller -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-csi-config-observer-controller-gcppddrivercsiconfigobservercontroller-config-observer-configobserver |
gcp-pd-csi-driver-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator |
gcp-pd-csi-driver-operator-lock |
LeaderElection |
gcp-pd-csi-driver-operator-7ddb788594-zjfz2_02cb35cc-f125-4936-8413-0a1b9151a48b became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-scheduler because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (no pods found with labels \"apiserver=true,app=openshift-apiserver-a,openshift-apiserver-anti-affinity=true,revision=1\")",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-csi-config-observer-controller-gcppddrivercsiconfigobservercontroller-config-observer-configobserver |
gcp-pd-csi-driver-operator |
ObservedConfigChanged |
Writing updated observed config:
  map[string]any{
+ 	"targetcsiconfig": map[string]any{
+ 		"servingInfo": map[string]any{
+ 			"cipherSuites": []any{
+ 				string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM"...),
+ 				string("TLS_ECDHE_RSA_WITH_AES_128_GCM_S"...),
+ 				string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM"...),
+ 				string("TLS_ECDHE_RSA_WITH_AES_256_GCM_S"...), ...,
+ 			},
+ 			"minTLSVersion": string("VersionTLS12"),
+ 		},
+ 	},
  } | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing | |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
FailedCreate |
Error creating: pods "gcp-pd-csi-driver-node-" is forbidden: error looking up service account openshift-cluster-csi-drivers/gcp-pd-csi-driver-node-sa: serviceaccount "gcp-pd-csi-driver-node-sa" not found | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
CSIDriverCreated |
Created CSIDriver.storage.k8s.io/pd.csi.storage.gke.io because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator |
gcp-pd-csi-driver-operator |
StorageClassCreated |
Created StorageClass.storage.k8s.io/standard-csi because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator |
gcp-pd-csi-driver-operator |
StorageClassCreated |
Created StorageClass.storage.k8s.io/ssd-csi because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ServiceAccountCreated |
Created ServiceAccount/gcp-pd-csi-driver-controller-sa -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-csi-driver-node-service_gcppddrivernodeservicecontroller-gcppddrivernodeservicecontroller |
gcp-pd-csi-driver-operator |
DaemonSetCreated |
Created DaemonSet.apps/gcp-pd-csi-driver-node -n openshift-cluster-csi-drivers because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
DeploymentUpdated |
Updated Deployment.apps/apiserver -n openshift-apiserver because it changed | |
openshift-cluster-csi-drivers |
deployment-controller |
gcp-pd-csi-driver-controller |
ScalingReplicaSet |
Scaled up replica set gcp-pd-csi-driver-controller-78697f4db4 to 2 | |
| (x2) | openshift-cluster-csi-drivers |
controllermanager |
gcp-pd-csi-driver-controller-pdb |
NoPods |
No matching pods found |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-csi-config-observer-controller-gcppddrivercsiconfigobservercontroller-config-observer-configobserver |
gcp-pd-csi-driver-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/gcp-pd-csi-driver-controller-pdb -n openshift-cluster-csi-drivers because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ServiceAccountCreated |
Created ServiceAccount/gcp-pd-csi-driver-node-sa -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ServiceCreated |
Created Service/gcp-pd-csi-driver-controller-metrics -n openshift-cluster-csi-drivers because it was missing | |
openshift-apiserver |
replicaset-controller |
apiserver-f67c66b4b |
SuccessfulCreate |
Created pod: apiserver-f67c66b4b-sppzf | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "GCPPDCSIDriverOperatorDeploymentDegraded: deployment openshift-cluster-csi-drivers/gcp-pd-csi-driver-operator has some pods failing; unavailable replicas=1" to "All is well" | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-5f8dd75f5c to 2 from 3 | |
openshift-apiserver |
replicaset-controller |
apiserver-5f8dd75f5c |
SuccessfulDelete |
Deleted pod: apiserver-5f8dd75f5c-z2rvt | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-f67c66b4b to 1 from 0 | |
openshift-cluster-csi-drivers |
replicaset-controller |
gcp-pd-csi-driver-controller-78697f4db4 |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-controller-78697f4db4-57qdf | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/gcp-pd-privileged-role because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-9p6n5 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-csi-main-attacher-binding because it was missing | |
openshift-cluster-csi-drivers |
replicaset-controller |
gcp-pd-csi-driver-controller-78697f4db4 |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-controller-78697f4db4-b529n | |
| (x7) | openshift-cluster-csi-drivers |
replicaset-controller |
gcp-pd-csi-driver-controller-78697f4db4 |
FailedCreate |
Error creating: pods "gcp-pd-csi-driver-controller-78697f4db4-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[0].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[0].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[0].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[0].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[0].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[0].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: .containers[1].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[1].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[1].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[1].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[1].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[1].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: .containers[2].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[2].containers[0].hostPort: Invalid value: 10301: Host ports are 
not allowed to be used, provider restricted-v2: .containers[2].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[2].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[2].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[2].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: .containers[3].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[3].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[3].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[3].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[3].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[3].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: .containers[4].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[4].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[4].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[4].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[4].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[4].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: 
.containers[5].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[5].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[5].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[5].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[5].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[5].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: .containers[6].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[6].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[6].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[6].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[6].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[6].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: .containers[7].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[7].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[7].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[7].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[7].containers[6].hostPort: Invalid value: 
9204: Host ports are not allowed to be used, provider restricted-v2: .containers[7].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: .containers[8].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[8].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[8].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[8].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[8].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[8].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider restricted-v2: .containers[9].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[9].containers[0].hostPort: Invalid value: 10301: Host ports are not allowed to be used, provider restricted-v2: .containers[9].containers[2].hostPort: Invalid value: 9202: Host ports are not allowed to be used, provider restricted-v2: .containers[9].containers[4].hostPort: Invalid value: 9203: Host ports are not allowed to be used, provider restricted-v2: .containers[9].containers[6].hostPort: Invalid value: 9204: Host ports are not allowed to be used, provider restricted-v2: .containers[9].containers[8].hostPort: Invalid value: 9205: Host ports are not allowed to be used, provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or 
serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount] |
| (x6) | openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
FailedCreate |
Error creating: pods "gcp-pd-csi-driver-node-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[6]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[7]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[8]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, provider restricted-v2: .containers[0].privileged: Invalid value: true: Privileged containers are not allowed, provider restricted-v2: .containers[0].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[0].containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, provider restricted-v2: .containers[0].containers[1].hostPort: Invalid value: 10303: Host ports are not allowed to be used, provider restricted-v2: .containers[1].privileged: Invalid value: true: Privileged containers are not allowed, provider restricted-v2: .containers[1].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[1].containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, provider restricted-v2: .containers[1].containers[1].hostPort: 
Invalid value: 10303: Host ports are not allowed to be used, provider restricted-v2: .containers[2].hostNetwork: Invalid value: true: Host network is not allowed to be used, provider restricted-v2: .containers[2].containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, provider restricted-v2: .containers[2].containers[1].hostPort: Invalid value: 10303: Host ports are not allowed to be used, provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount] |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-controller-privileged-binding because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-node-privileged-binding because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing changed from True to False ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: StorageClass provided by supplied CSI Driver instead of the cluster-storage-operator") | |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ConfigMapCreated |
Created ConfigMap/gcp-pd-csi-driver-trusted-ca-bundle -n openshift-cluster-csi-drivers because it was missing | |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-node-9p6n5 | |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-node-rlw6r | |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-node-zqcnw | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-csi-main-provisioner-binding because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-volumesnapshot-reader-provisioner-binding because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator-lock |
LeaderElection |
cluster-storage-operator-86f6b4f867-vvnvr_5626be82-2fcb-42af-a1c2-356ef0fd6638 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,internal-loadbalancer-serving-certkey,kubelet-client,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,internal-loadbalancer-serving-certkey,kubelet-client,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-1 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-rlw6r |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/audit-errors -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-requests -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/api-usage -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration-v1beta3 because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-basic -n openshift-kube-apiserver because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-storageclass-reader-resizer-binding because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration-v1beta3 because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
StorageVersionMigrationCreated |
Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
CustomResourceDefinitionUpdated |
Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-csi-main-resizer-binding because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-zqcnw |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78697f4db4-b529n |
FailedMount |
MountVolume.SetUp failed for volume "metrics-serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-cluster-csi-drivers |
replicaset-controller |
gcp-pd-csi-driver-controller-78697f4db4 |
SuccessfulDelete |
Deleted pod: gcp-pd-csi-driver-controller-78697f4db4-57qdf | |
| (x6) | openshift-controller-manager |
kubelet |
controller-manager-74d456756d-twm5k |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-cluster-csi-drivers |
replicaset-controller |
gcp-pd-csi-driver-controller-78fcc99686 |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-controller-78fcc99686-zgfxx | |
openshift-cluster-csi-drivers |
deployment-controller |
gcp-pd-csi-driver-controller |
ScalingReplicaSet |
Scaled down replica set gcp-pd-csi-driver-controller-78697f4db4 to 1 from 2 | |
openshift-cluster-csi-drivers |
deployment-controller |
gcp-pd-csi-driver-controller |
ScalingReplicaSet |
Scaled up replica set gcp-pd-csi-driver-controller-78fcc99686 to 1 from 0 | |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-7b877984c7-9dd9p |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/gcp-pd-kube-rbac-proxy-role because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" | |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
SuccessfulDelete |
Deleted pod: gcp-pd-csi-driver-node-9p6n5 | |
| (x5) | openshift-route-controller-manager |
kubelet |
route-controller-manager-7b877984c7-fvxj4 |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-network-operator |
kubelet |
iptables-alerter-dc4tl |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
SuccessfulDelete |
Deleted pod: gcp-pd-csi-driver-node-rlw6r | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "RevisionControllerDegraded: configmap \"audit\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
| (x2) | openshift-oauth-apiserver |
controllermanager |
oauth-apiserver-pdb |
NoPods |
No matching pods found |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"audit-0\" not found" | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "All is well" to "GCPPDCSIDriverOperatorDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,internal-loadbalancer-serving-certkey,kubelet-client,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/oauth-apiserver-pdb -n openshift-oauth-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources-staticresources |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: client-ca, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" | |
openshift-network-operator |
daemonset-controller |
iptables-alerter |
SuccessfulCreate |
Created pod: iptables-alerter-dc4tl | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-apiserverstaticresources-staticresources |
authentication-operator |
ServiceAccountCreated |
Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources |
kube-apiserver-operator |
PrometheusRuleCreated |
Created PrometheusRule.monitoring.coreos.com/podsecurity -n openshift-kube-apiserver because it was missing | |
| (x3) | openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78697f4db4-57qdf |
FailedMount |
MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "gcp-pd-csi-driver-controller-metrics-serving-cert" not found |
openshift-apiserver |
multus |
apiserver-f67c66b4b-sppzf |
AddedInterface |
Add eth0 [10.129.0.6/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-sppzf |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-csi-driver-node-service_gcppddrivernodeservicecontroller-gcppddrivernodeservicecontroller |
gcp-pd-csi-driver-operator |
DaemonSetUpdated |
Updated DaemonSet.apps/gcp-pd-csi-driver-node -n openshift-cluster-csi-drivers because it changed | |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
SuccessfulDelete |
Deleted pod: gcp-pd-csi-driver-node-zqcnw | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller |
gcp-pd-csi-driver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-csi-main-snapshotter-binding because it was missing | |
| | openshift-etcd | kubelet | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" |
| | openshift-apiserver | taint-eviction-controller | apiserver-f67c66b4b-sppzf | TaintManagerEviction | Cancelling deletion of Pod openshift-apiserver/apiserver-f67c66b4b-sppzf |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources-staticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/kube-apiserver-slos-extended -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-csi-drivers | gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller | gcp-pd-csi-driver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/gcp-pd-kube-rbac-proxy-binding because it was missing |
| | openshift-cluster-csi-drivers | gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller | gcp-pd-csi-driver-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/gcp-pd-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller | gcp-pd-csi-driver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/gcp-pd-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-1 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" to "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to act on changes",Available message changed from "GCPPDCSIDriverOperatorCRAvailable: GCPPDDriverControllerServiceControllerAvailable: Waiting for Deployment" to "GCPPDCSIDriverOperatorCRAvailable: GCPPDDriverControllerServiceControllerAvailable: Waiting for Deployment\nGCPPDCSIDriverOperatorCRAvailable: GCPPDDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-585cd96855-j89wm | AddedInterface | Add eth0 [10.130.0.27/23] from ovn-kubernetes |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded message changed from "GCPPDCSIDriverOperatorDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverNodeServiceControllerDegraded: Operation cannot be fulfilled on clustercsidrivers.operator.openshift.io \"pd.csi.storage.gke.io\": the object has been modified; please apply your changes to the latest version and try again" to "GCPPDCSIDriverOperatorCRDegraded: GCPPDDriverNodeServiceControllerDegraded: Operation cannot be fulfilled on clustercsidrivers.operator.openshift.io \"pd.csi.storage.gke.io\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-cloud-controller-manager | taint-eviction-controller | gcp-cloud-controller-manager-6658458d69-j98j6 | TaintManagerEviction | Cancelling deletion of Pod openshift-cloud-controller-manager/gcp-cloud-controller-manager-6658458d69-j98j6 |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded message changed from "GCPPDCSIDriverOperatorDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GCPPDCSIDriverOperatorDegraded: Operation cannot be fulfilled on storages.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverNodeServiceControllerDegraded: Operation cannot be fulfilled on clustercsidrivers.operator.openshift.io \"pd.csi.storage.gke.io\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from False to True ("GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("GCPPDCSIDriverOperatorCRAvailable: GCPPDDriverControllerServiceControllerAvailable: Waiting for Deployment"),Upgradeable changed from Unknown to True ("All is well") |
| (x7) | openshift-monitoring | kubelet | cluster-monitoring-operator-6645c9cbc-qpg45 | FailedMount | MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found |
| (x7) | openshift-multus | kubelet | multus-admission-controller-64669dd88c-b4vtj | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-machine-api | multus | cluster-baremetal-operator-7648bf4f7c-nml8w | AddedInterface | Add eth0 [10.130.0.28/23] from ovn-kubernetes |
| (x11) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-cluster-csi-drivers | taint-eviction-controller | gcp-pd-csi-driver-controller-78fcc99686-zgfxx | TaintManagerEviction | Cancelling deletion of Pod openshift-cluster-csi-drivers/gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
| | openshift-network-operator | taint-eviction-controller | network-operator-69d4947f66-6pwvp | TaintManagerEviction | Cancelling deletion of Pod openshift-network-operator/network-operator-69d4947f66-6pwvp |
| (x7) | openshift-operator-lifecycle-manager | kubelet | catalog-operator-67dc75ccb9-j6m5x | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found |
| | openshift-cluster-version | taint-eviction-controller | cluster-version-operator-59fc58bb8-h6cf2 | TaintManagerEviction | Cancelling deletion of Pod openshift-cluster-version/cluster-version-operator-59fc58bb8-h6cf2 |
| (x7) | openshift-operator-lifecycle-manager | kubelet | olm-operator-7497f58c94-vgnwd | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found |
| | openshift-etcd | multus | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.16/23] from ovn-kubernetes |
| (x7) | openshift-operator-lifecycle-manager | kubelet | package-server-manager-f7554d4b7-xd4h9 | FailedMount | MountVolume.SetUp failed for volume "package-server-manager-serving-cert" : secret "package-server-manager-serving-cert" not found |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (no pods found with labels \"apiserver=true,app=openshift-apiserver-a,openshift-apiserver-anti-affinity=true,revision=1\")" to "All is well",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| (x7) | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | FailedMount | MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
| (x7) | openshift-multus | kubelet | multus-admission-controller-64669dd88c-zvr4t | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | Created | Created container kube-rbac-proxy |
| | openshift-dns-operator | multus | dns-operator-79c9668d4f-5xbr8 | AddedInterface | Add eth0 [10.130.0.14/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | Started | Started container kube-rbac-proxy |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-cluster-csi-drivers | gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller | gcp-pd-csi-driver-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/gcp-pd-csi-driver-lease-leader-election -n openshift-cluster-csi-drivers because it was missing |
| | openshift-cluster-csi-drivers | gcp-pd-csi-driver-operator-gcppddriverstaticresourcescontroller-gcppddriverstaticresourcescontroller | gcp-pd-csi-driver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/gcp-pd-csi-driver-lease-leader-election -n openshift-cluster-csi-drivers because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | Created | Created container kube-rbac-proxy |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | Started | Started container kube-rbac-proxy |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to act on changes" to "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to update 3 node pods" |
| | openshift-controller-manager | replicaset-controller | controller-manager-6b59c47496 | SuccessfulCreate | Created pod: controller-manager-6b59c47496-mgxqd |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" |
| | openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-5b66777f7c-9pqmc | AddedInterface | Add eth0 [10.130.0.9/23] from ovn-kubernetes |
| | openshift-image-registry | multus | cluster-image-registry-operator-7c8c54f569-rsqg2 | AddedInterface | Add eth0 [10.130.0.33/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | Started | Started container kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Started | Started container csi-driver |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded message changed from "GCPPDCSIDriverOperatorCRDegraded: GCPPDDriverNodeServiceControllerDegraded: Operation cannot be fulfilled on clustercsidrivers.operator.openshift.io \"pd.csi.storage.gke.io\": the object has been modified; please apply your changes to the latest version and try again" to "GCPPDCSIDriverOperatorCRDegraded: All is well" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | Created | Created container kube-rbac-proxy |
| | openshift-ingress-operator | multus | ingress-operator-6b9fd98fb4-hksdp | AddedInterface | Add eth0 [10.130.0.26/23] from ovn-kubernetes |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-6b59c47496 to 1 from 0 |
| | openshift-machine-config-operator | machine-config-operator | ci-op-2fcpj5j6-f6035-2lklf-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | multus | cluster-autoscaler-operator-776f9d4bf4-dthxh | AddedInterface | Add eth0 [10.130.0.7/23] from ovn-kubernetes |
| | openshift-machine-config-operator | multus | machine-config-operator-55d6dfd54f-k2phh | AddedInterface | Add eth0 [10.130.0.32/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-operator-55d6dfd54f-k2phh | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| (x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Created | Created container csi-driver |
| | openshift-machine-config-operator | kubelet | machine-config-operator-55d6dfd54f-k2phh | Created | Created container kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-zqcnw | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" in 3.175s (3.175s including waiting). Image size: 536898687 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-zqcnw | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-zqcnw | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-zqcnw | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" in 3.587s (3.587s including waiting). Image size: 536898687 bytes. |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-74d456756d to 0 from 1 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-machine-api | multus | control-plane-machine-set-operator-7667c744f7-8tlf7 | AddedInterface | Add eth0 [10.130.0.15/23] from ovn-kubernetes |
| | openshift-controller-manager | replicaset-controller | controller-manager-74d456756d | SuccessfulDelete | Deleted pod: controller-manager-74d456756d-twm5k |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-operator-55d6dfd54f-k2phh | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | Created | Created container kube-rbac-proxy |
| | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-machine-api | multus | machine-api-operator-c6cf9575f-k7jtl | AddedInterface | Add eth0 [10.130.0.21/23] from ovn-kubernetes |
| (x42) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | secrets: kube-scheduler-client-cert-key |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-scheduler-pod-0\" not found" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmap \"kube-scheduler-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/mcd-prometheus-k8s -n openshift-machine-config-operator because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]",Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: secret doesn't exist | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config started a version change from [] to [{operator 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest} {operator-image registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Created | Created container csi-node-driver-registrar |
| | openshift-etcd | kubelet | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" in 3.473s (3.473s including waiting). Image size: 500148391 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing |
| (x4) | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-78fcc99686-zgfxx | FailedMount | MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "gcp-pd-csi-driver-controller-metrics-serving-cert" not found |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Started | Started container csi-node-driver-registrar |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" in 1.986s (1.986s including waiting). Image size: 396191352 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: secret doesn't exist |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: The master nodes not ready: node \"ci-op-2fcpj5j6-f6035-2lklf-master-0\" not ready since 2024-10-24 13:01:04 +0000 UTC because KubeletNotReady (container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?)\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-etcd | kubelet | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing |
| | openshift-etcd | kubelet | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-zqcnw | Started | Started container csi-node-driver-registrar |
| | openshift-network-operator | kubelet | iptables-alerter-dc4tl | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" in 5.011s (5.011s including waiting). Image size: 563905988 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 0 to 1 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 static pod not found |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-zqcnw | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-sppzf | Created | Created container fix-audit-permissions |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyBindingCreated | Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/mcn-guards-binding because it was missing |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-sppzf | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" in 4.902s (4.902s including waiting). Image size: 537475546 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-zqcnw | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-zqcnw | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" in 3.21s (3.21s including waiting). Image size: 396191352 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-sppzf | Started | Started container fix-audit-permissions |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ValidatingAdmissionPolicyCreated | Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/mcn-guards because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xpqvd | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xpqvd | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-7b877984c7-pghzh | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" in 1.441s (1.441s including waiting). Image size: 396574211 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Created | Created container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-rlw6r | Started | Started container csi-liveness-probe |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-zhbnq |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-vf8g9 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-vf8g9 | Created | Created container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-vf8g9 | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-vf8g9 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-vf8g9 |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-xpqvd |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-vf8g9 | Created | Created container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-vf8g9 | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xpqvd | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xpqvd | Created | Created container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xpqvd | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-xpqvd | Created | Created container kube-rbac-proxy |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6f5fbdd644 | SuccessfulCreate | Created pod: apiserver-6f5fbdd644-x99cg |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-cluster-csi-drivers | daemonset-controller | gcp-pd-csi-driver-node | SuccessfulCreate | Created pod: gcp-pd-csi-driver-node-5zqr7 |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nAPIServerDeploymentDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerDeploymentDegraded: \nAPIServerWorkloadDegraded: waiting for .status.latestAvailableRevision to be available\nAPIServerWorkloadDegraded: \nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: 
PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-authentication-operator |
oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller |
authentication-operator |
DeploymentCreated |
Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddriverconditionalstaticresourcescontroller-gcppddriverconditionalstaticresourcescontroller |
gcp-pd-csi-driver-operator |
VolumeSnapshotClassCreated |
Created VolumeSnapshotClass.snapshot.storage.k8s.io/v1 because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-rlw6r |
Killing |
Stopping container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-rlw6r |
Killing |
Stopping container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-rlw6r |
Killing |
Stopping container csi-driver | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found | |
| (x4) | openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78697f4db4-b529n |
FailedMount |
MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "gcp-pd-csi-driver-controller-metrics-serving-cert" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-6f5fbdd644 to 3 | |
openshift-network-operator |
kubelet |
iptables-alerter-dc4tl |
Started |
Started container iptables-alerter | |
openshift-network-operator |
kubelet |
iptables-alerter-dc4tl |
Created |
Created container iptables-alerter | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-x99cg |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-6f5fbdd644 |
SuccessfulCreate |
Created pod: apiserver-6f5fbdd644-gqhhc | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-6f5fbdd644 |
SuccessfulCreate |
Created pod: apiserver-6f5fbdd644-l2g2b | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" already present on machine | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Created |
Created container csi-driver | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to update 3 node pods" to "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to update 3 node pods" | |
openshift-cluster-csi-drivers |
deployment-controller |
gcp-pd-csi-driver-controller |
ScalingReplicaSet |
Scaled up replica set gcp-pd-csi-driver-controller-745666687f to 1 from 0 | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Available message changed from "GCPPDCSIDriverOperatorCRAvailable: GCPPDDriverControllerServiceControllerAvailable: Waiting for Deployment" to "GCPPDCSIDriverOperatorCRAvailable: GCPPDDriverControllerServiceControllerAvailable: Waiting for Deployment\nGCPPDCSIDriverOperatorCRAvailable: GCPPDDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" | |
openshift-kube-scheduler |
kubelet |
installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-x99cg |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" | |
openshift-oauth-apiserver |
multus |
apiserver-6f5fbdd644-x99cg |
AddedInterface |
Add eth0 [10.130.0.45/23] from ovn-kubernetes | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-zhbnq |
Created |
Created container machine-config-daemon | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-zhbnq |
Started |
Started container machine-config-daemon | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-zhbnq |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
| (x3) | openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator-gcppddrivercontrollerservicecontroller-deployment-controller--gcppddrivercontrollerservicecontroller |
gcp-pd-csi-driver-operator |
DeploymentUpdated |
Updated Deployment.apps/gcp-pd-csi-driver-controller -n openshift-cluster-csi-drivers because it changed |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-zhbnq |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-zhbnq |
Created |
Created container kube-rbac-proxy | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing | |
| (x3) | openshift-machine-config-operator |
machine-config-operator |
ci-op-2fcpj5j6-f6035-2lklf-master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}}
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to update 3 node pods" to "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to update 3 node pods" | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Started |
Started container csi-liveness-probe | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-sppzf |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Created |
Created container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" already present on machine | |
openshift-kube-scheduler |
multus |
installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.10/23] from ovn-kubernetes | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Started |
Started container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Created |
Created container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Started |
Started container csi-driver | |
openshift-oauth-apiserver |
multus |
apiserver-6f5fbdd644-gqhhc |
AddedInterface |
Add eth0 [10.129.0.9/23] from ovn-kubernetes | |
openshift-cluster-csi-drivers |
replicaset-controller |
gcp-pd-csi-driver-controller-745666687f |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-controller-745666687f-8zwgp | |
openshift-cluster-csi-drivers |
deployment-controller |
gcp-pd-csi-driver-controller |
ScalingReplicaSet |
Scaled down replica set gcp-pd-csi-driver-controller-78697f4db4 to 0 from 1 | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-8zwgp |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-l2g2b |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" | |
openshift-oauth-apiserver |
multus |
apiserver-6f5fbdd644-l2g2b |
AddedInterface |
Add eth0 [10.128.0.17/23] from ovn-kubernetes | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-5zqr7 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" already present on machine | |
openshift-machine-config-operator |
kubelet |
machine-config-daemon-zhbnq |
Started |
Started container kube-rbac-proxy | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-zqcnw |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" in 2.844s (2.844s including waiting). Image size: 396574211 bytes. | |
openshift-cluster-csi-drivers |
replicaset-controller |
gcp-pd-csi-driver-controller-78697f4db4 |
SuccessfulDelete |
Deleted pod: gcp-pd-csi-driver-controller-78697f4db4-b529n | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Created |
Created container csi-driver | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-zqcnw |
Killing |
Stopping container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-zqcnw |
Killing |
Stopping container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-zqcnw |
Killing |
Stopping container csi-driver | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Started |
Started container csi-driver | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-zqcnw |
Started |
Started container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-zqcnw |
Created |
Created container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:96b42bbafd9e46a37021c4ed3a565dce80f369546a57a6573bcf89a827d0366f" | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-sppzf |
Started |
Started container openshift-apiserver | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: check-endpoints-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-sppzf |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" | |
| (x6) | openshift-apiserver |
kubelet |
apiserver-5f8dd75f5c-7rz6r |
FailedMount |
MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
openshift-cluster-version |
kubelet |
cluster-version-operator-59fc58bb8-h6cf2 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-sppzf |
Created |
Created container openshift-apiserver | |
| (x2) | openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Available message changed from "GCPPDCSIDriverOperatorCRAvailable: GCPPDDriverControllerServiceControllerAvailable: Waiting for Deployment\nGCPPDCSIDriverOperatorCRAvailable: GCPPDDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" to "GCPPDCSIDriverOperatorCRAvailable: GCPPDDriverControllerServiceControllerAvailable: Waiting for Deployment" |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-kube-apiserver-node |
kube-apiserver-operator |
MasterNodesReadyChanged |
All master nodes are ready |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-node-d6rjz | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-openshiftauthenticationstaticresources-staticresources |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-authentication because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-l2g2b |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-l2g2b |
Created |
Created container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-l2g2b |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" in 2.601s (2.601s including waiting). Image size: 475806593 bytes. | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" already present on machine | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Created |
Created container csi-driver | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/mcc-prometheus-k8s -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-puller-binding -n openshift-machine-config-operator because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Started |
Started container csi-driver | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" already present on machine | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyCreated |
Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/managed-bootimages-platform-check because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-os-puller -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyCreated |
Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/machine-configuration-guards because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-l2g2b |
Created |
Created container oauth-apiserver | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyCreated |
Created ValidatingAdmissionPolicy.admissionregistration.k8s.io/custom-machine-config-pool-selector because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyBindingCreated |
Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/machine-configuration-guards-binding because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-l2g2b |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-l2g2b |
Started |
Started container oauth-apiserver | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyBindingCreated |
Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/managed-bootimages-platform-check-binding because it was missing | |
openshift-machine-config-operator |
deployment-controller |
machine-config-controller |
ScalingReplicaSet |
Scaled up replica set machine-config-controller-54475c996 to 1 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-54475c996-znc5k |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ValidatingAdmissionPolicyBindingCreated |
Created ValidatingAdmissionPolicyBinding.admissionregistration.k8s.io/custom-machine-config-pool-selector-binding because it was missing | |
openshift-machine-config-operator |
multus |
machine-config-controller-54475c996-znc5k |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-node namespace | |
openshift-machine-config-operator |
replicaset-controller |
machine-config-controller-54475c996 |
SuccessfulCreate |
Created pod: machine-config-controller-54475c996-znc5k | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: \"service-ca.crt\" key of the \"openshift-authentication/v4-0-config-system-service-ca\" CM is empty\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-54475c996-znc5k |
Created |
Created container kube-rbac-proxy | |
openshift-machine-config-operator |
machine-config-operator |
ci-op-2fcpj5j6-f6035-2lklf-master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-54475c996-znc5k |
Started |
Started container kube-rbac-proxy | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-gqhhc pod, 2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: ",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 2 triggered by "required configmap/kube-scheduler-pod has changed" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nSystemServiceCAConfigDegraded: Config \"\" has no service CA data\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-gqhhc pod, 2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-gqhhc pod, 2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-server |
SuccessfulCreate |
Created pod: machine-config-server-btncf | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-server |
SuccessfulCreate |
Created pod: machine-config-server-5qcv6 | |
openshift-machine-config-operator |
daemonset-controller |
machine-config-server |
SuccessfulCreate |
Created pod: machine-config-server-q6rkk | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-controller-manager: caused by changes in data.config.yaml | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
SecretCreated |
Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" in 8.897s (8.897s including waiting). Image size: 475806593 bytes. | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:96b42bbafd9e46a37021c4ed3a565dce80f369546a57a6573bcf89a827d0366f" in 8.791s (8.791s including waiting). Image size: 444232272 bytes. | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-gqhhc pod, 2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-gqhhc pod, 2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-machine-config-operator |
kubelet |
machine-config-server-5qcv6 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine | |
openshift-machine-config-operator |
kubelet |
machine-config-server-btncf |
Started |
Started container machine-config-server | |
openshift-machine-config-operator |
kubelet |
machine-config-server-btncf |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
kubelet |
machine-config-server-5qcv6 |
Started |
Started container machine-config-server | |
openshift-machine-config-operator |
kubelet |
machine-config-server-5qcv6 |
Created |
Created container machine-config-server | |
openshift-machine-config-operator |
kubelet |
machine-config-server-btncf |
Created |
Created container machine-config-server | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Created |
Created container csi-provisioner | |
openshift-machine-config-operator |
kubelet |
machine-config-server-q6rkk |
Started |
Started container machine-config-server | |
openshift-kube-scheduler |
kubelet |
installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-kube-scheduler |
kubelet |
installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" in 9.363s (9.363s including waiting). Image size: 479171827 bytes. | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-sppzf |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Created |
Created container csi-node-driver-registrar | |
openshift-kube-scheduler |
kubelet |
installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container installer | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Started |
Started container csi-provisioner | |
openshift-kube-scheduler |
kubelet |
installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Started |
Started container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
pd.csi.storage.gke.io/1729775045791-6267-pd.csi.storage.gke.io |
pd-csi-storage-gke-io |
LeaderElection |
1729775045791-6267-pd-csi-storage-gke-io became leader | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-sppzf |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-1 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
machineconfigcontroller-rendercontroller |
worker |
RenderedConfigGenerated |
rendered-worker-59826e19ffd81ce395b52f6b2b19b336 successfully generated (release version: 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, controller version: 54144b315275a22cc9140677792ef5ed0189bcde) | |
openshift-machine-config-operator |
kubelet |
machine-config-server-q6rkk |
Created |
Created container machine-config-server | |
openshift-cluster-version |
kubelet |
cluster-version-operator-59fc58bb8-h6cf2 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" in 9.158s (9.158s including waiting). Image size: 464578123 bytes. | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Started |
Started container csi-liveness-probe | |
openshift-machine-config-operator |
machineconfigcontroller-rendercontroller |
master |
RenderedConfigGenerated |
rendered-master-1f75404f08afc3926de8a846ea4bc6ff successfully generated (release version: 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, controller version: 54144b315275a22cc9140677792ef5ed0189bcde) | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-d6rjz |
Created |
Created container csi-liveness-probe | |
openshift-machine-config-operator |
kubelet |
machine-config-server-q6rkk |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-sppzf |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" in 9.326s (9.326s including waiting). Image size: 496739801 bytes. | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
Created |
Created container fix-audit-permissions | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
Started |
Started container oauth-apiserver | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Created |
Created container provisioner-kube-rbac-proxy | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node ci-op-2fcpj5j6-f6035-2lklf-master-1 now has machineconfiguration.openshift.io/currentConfig=rendered-master-1f75404f08afc3926de8a846ea4bc6ff | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder-events because it was missing | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_10325640-09a6-4e32-a451-62d35e5dd17a became leader | |
openshift-network-operator |
cluster-network-operator |
network-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-1 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/machine-os-builder because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n default because it was missing | |
openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
Created |
Created container oauth-apiserver | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node ci-op-2fcpj5j6-f6035-2lklf-master-1 now has machineconfiguration.openshift.io/state=Done | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node ci-op-2fcpj5j6-f6035-2lklf-master-1 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-1f75404f08afc3926de8a846ea4bc6ff | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Started |
Started container provisioner-kube-rbac-proxy | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16da45916745bbc9992c5dd8987d939db995f1f48b4f320911ccff9e3639d194" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/machine-os-builder-events -n openshift-machine-config-operator because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-os-builder-anyuid because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ServiceAccountCreated |
Created ServiceAccount/machine-os-builder -n openshift-machine-config-operator because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-gqhhc pod, 2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6f5fbdd644-gqhhc pod, 2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret 
\"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-1 -n openshift-kube-controller-manager because it was missing | |
| (x6) | openshift-route-controller-manager |
kubelet |
route-controller-manager-7b877984c7-pghzh |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager: cause by changes in data.config.yaml | |
openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: RequiredPoolsFailed |
Unable to apply 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest: error during syncRequiredMachineConfigPools: context deadline exceeded | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-dc88f967c |
SuccessfulCreate |
Created pod: route-controller-manager-dc88f967c-cfpfn | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node ci-op-2fcpj5j6-f6035-2lklf-master-0 now has machineconfiguration.openshift.io/currentConfig=rendered-master-1f75404f08afc3926de8a846ea4bc6ff | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-dc88f967c to 1 from 0 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-7b877984c7 to 2 from 3 | |
openshift-apiserver |
replicaset-controller |
apiserver-5f8dd75f5c |
SuccessfulDelete |
Deleted pod: apiserver-5f8dd75f5c-7rz6r | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node ci-op-2fcpj5j6-f6035-2lklf-master-0 now has machineconfiguration.openshift.io/state=Done | |
openshift-machine-config-operator |
machineconfigcontroller-nodecontroller |
master |
AnnotationChange |
Node ci-op-2fcpj5j6-f6035-2lklf-master-0 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-1f75404f08afc3926de8a846ea4bc6ff | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-5f8dd75f5c to 1 from 2 | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Created |
Created container attacher-kube-rbac-proxy | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-f67c66b4b to 2 from 1 | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16da45916745bbc9992c5dd8987d939db995f1f48b4f320911ccff9e3639d194" in 2.067s (2.067s including waiting). Image size: 441126441 bytes. | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-1 -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Created |
Created container csi-attacher | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7b877984c7 |
SuccessfulDelete |
Deleted pod: route-controller-manager-7b877984c7-pghzh | |
openshift-cluster-csi-drivers |
external-attacher-leader-pd.csi.storage.gke.io/ci-op-2fcpj5j6-f6035-2lklf-master-0 |
external-attacher-leader-pd-csi-storage-gke-io |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0 became leader | |
openshift-apiserver |
replicaset-controller |
apiserver-f67c66b4b |
SuccessfulCreate |
Created pod: apiserver-f67c66b4b-tjp8m | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Started |
Started container csi-attacher | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:51d084be99ab25e6f1ce93612798a543842d3ac1c0644abd8a69e495e91be5fa" | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Started |
Started container attacher-kube-rbac-proxy | |
openshift-apiserver |
multus |
apiserver-f67c66b4b-tjp8m |
AddedInterface |
Add eth0 [10.130.0.46/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-1 -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-samples-operator |
deployment-controller |
cluster-samples-operator |
ScalingReplicaSet |
Scaled up replica set cluster-samples-operator-68ff7cdcb6 to 1 | |
openshift-cluster-samples-operator |
replicaset-controller |
cluster-samples-operator-68ff7cdcb6 |
SuccessfulCreate |
Created pod: cluster-samples-operator-68ff7cdcb6-z7zcl | |
openshift-kube-scheduler |
multus |
installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.11/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-68ff7cdcb6-z7zcl |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:53185d3f82a587bd8a1361322ade3b5f9c20368772319eb823b233a7896e765f" | |
openshift-cluster-samples-operator |
multus |
cluster-samples-operator-68ff7cdcb6-z7zcl |
AddedInterface |
Add eth0 [10.129.0.12/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-controller-manager because it was missing | |
| (x25) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-controller-manager-pod-0\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nRevisionControllerDegraded: configmap \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: client-ca",Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6f5fbdd644-gqhhc pod, 2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: 
neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:51d084be99ab25e6f1ce93612798a543842d3ac1c0644abd8a69e495e91be5fa" in 6.411s (6.411s including waiting). Image size: 440932380 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-csi-drivers |
external-resizer-pd-csi-storage-gke-io/ci-op-2fcpj5j6-f6035-2lklf-master-0 |
external-resizer-pd-csi-storage-gke-io |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0 became leader | |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-78fcc99686-zgfxx | Created | Created container csi-resizer |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-78fcc99686-zgfxx | Started | Started container csi-resizer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 now has machineconfiguration.openshift.io/currentConfig=rendered-master-1f75404f08afc3926de8a846ea4bc6ff |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 now has machineconfiguration.openshift.io/state=Done |
| | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 now has machineconfiguration.openshift.io/desiredConfig=rendered-master-1f75404f08afc3926de8a846ea4bc6ff |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-749bf6f86d to 1 |
| | openshift-multus | replicaset-controller | multus-admission-controller-749bf6f86d | SuccessfulCreate | Created pod: multus-admission-controller-749bf6f86d-f9cds |
| | openshift-cluster-csi-drivers | external-snapshotter-leader-pd.csi.storage.gke.io/ci-op-2fcpj5j6-f6035-2lklf-master-0 | external-snapshotter-leader-pd-csi-storage-gke-io | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0 became leader |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-68ff7cdcb6-z7zcl | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:53185d3f82a587bd8a1361322ade3b5f9c20368772319eb823b233a7896e765f" in 7.77s (7.77s including waiting). Image size: 432306176 bytes. |
| | openshift-cluster-samples-operator | file-change-watchdog | cluster-samples-operator | FileChangeWatchdogStarted | Started watching files for process cluster-samples-operator[2] |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-68ff7cdcb6-z7zcl | Started | Started container cluster-samples-operator-watch |
| | openshift-multus | multus | multus-admission-controller-749bf6f86d-f9cds | AddedInterface | Add eth0 [10.129.0.14/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-68ff7cdcb6-z7zcl | Created | Created container cluster-samples-operator-watch |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-68ff7cdcb6-z7zcl | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:53185d3f82a587bd8a1361322ade3b5f9c20368772319eb823b233a7896e765f" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-68ff7cdcb6-z7zcl | Started | Started container cluster-samples-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-68ff7cdcb6-z7zcl | Created | Created container cluster-samples-operator |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-qv7c9 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e90b804647b2ffdd6650887ca5bbe6c5d7f4988343ea35f9214a2523b3f5cc76": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e90b804647b2ffdd6650887ca5bbe6c5d7f4988343ea35f9214a2523b3f5cc76: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Available changed from False to True ("DefaultStorageClassControllerAvailable: StorageClass provided by supplied CSI Driver instead of the cluster-storage-operator\nGCPPDCSIDriverOperatorCRAvailable: All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Upgradeable message changed from "All is well" to "KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced." |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-9p6n5 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-9p6n5 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-qv7c9 | Failed | Error: ImagePullBackOff |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-qv7c9 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e90b804647b2ffdd6650887ca5bbe6c5d7f4988343ea35f9214a2523b3f5cc76" |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | Created | Created container baremetal-kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3596fda1c9ca4be9b64d082aa96220e4da44af1b4c6c7f6ef57ea2b8a88ce6ef": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3596fda1c9ca4be9b64d082aa96220e4da44af1b4c6c7f6ef57ea2b8a88ce6ef: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-9p6n5 | Failed | Error: ErrImagePull |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-qv7c9 | Failed | Error: ErrImagePull |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" in 1.754s (1.754s including waiting). Image size: 436172339 bytes. |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Created | Created container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Created | Created container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-f7554d4b7-xd4h9 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-multus | multus | multus-admission-controller-64669dd88c-zvr4t | AddedInterface | Add eth0 [10.130.0.23/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-64669dd88c-b4vtj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" |
| | openshift-operator-lifecycle-manager | multus | olm-operator-7497f58c94-vgnwd | AddedInterface | Add eth0 [10.130.0.19/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | olm-operator-7497f58c94-vgnwd | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-64669dd88c to 1 from 2 |
| | openshift-multus | replicaset-controller | multus-admission-controller-64669dd88c | SuccessfulDelete | Deleted pod: multus-admission-controller-64669dd88c-b4vtj |
| | openshift-multus | replicaset-controller | multus-admission-controller-749bf6f86d | SuccessfulCreate | Created pod: multus-admission-controller-749bf6f86d-5zx7k |
| | openshift-operator-lifecycle-manager | multus | catalog-operator-67dc75ccb9-j6m5x | AddedInterface | Add eth0 [10.130.0.35/23] from ovn-kubernetes |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-749bf6f86d to 2 from 1 |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-67dc75ccb9-j6m5x | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | multus | package-server-manager-f7554d4b7-xd4h9 | AddedInterface | Add eth0 [10.130.0.20/23] from ovn-kubernetes |
| | openshift-multus | multus | multus-admission-controller-749bf6f86d-5zx7k | AddedInterface | Add eth0 [10.130.0.47/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-f7554d4b7-xd4h9 | Created | Created container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-f7554d4b7-xd4h9 | Started | Started container kube-rbac-proxy |
| | openshift-operator-lifecycle-manager | kubelet | package-server-manager-f7554d4b7-xd4h9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" |
| | openshift-multus | kubelet | multus-admission-controller-64669dd88c-zvr4t | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" |
| | openshift-multus | multus | multus-admission-controller-64669dd88c-b4vtj | AddedInterface | Add eth0 [10.130.0.36/23] from ovn-kubernetes |
| | openshift-monitoring | multus | cluster-monitoring-operator-6645c9cbc-qpg45 | AddedInterface | Add eth0 [10.130.0.18/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | multus | marketplace-operator-7ddb67b76c-d2flk | AddedInterface | Add eth0 [10.130.0.30/23] from ovn-kubernetes |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | Failed | Error: ImagePullBackOff |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3596fda1c9ca4be9b64d082aa96220e4da44af1b4c6c7f6ef57ea2b8a88ce6ef" |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-5zx7k | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" |
| | openshift-etcd | static-pod-installer | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-2 | StaticPodInstallerCompleted | Successfully installed revision 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 2 triggered by "required configmap/config has changed,required configmap/cluster-policy-controller-config has changed" |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" |
| (x2) | openshift-etcd | controllermanager | etcd-guard-pdb | NoPods | No matching pods found |
| (x17) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: client-ca |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/etcd-guard-pdb -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodCreated | Created Pod/etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | multus | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.19/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" in 2.909s (2.909s including waiting). Image size: 515033120 bytes. |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container setup |
| | openshift-multus | multus | network-metrics-daemon-7wllj | AddedInterface | Add eth0 [10.129.0.4/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-tj4rp | AddedInterface | Add eth0 [10.130.0.4/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container guard |
| | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container guard |
| | openshift-multus | multus | network-metrics-daemon-flk7c | AddedInterface | Add eth0 [10.128.0.3/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-resources-copy |
| (x2) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config version changed from [] to [{operator 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest} {operator-image registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69}] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodUpdated | Updated Pod/etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-etcd because it changed |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| (x2) | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-qv7c9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e90b804647b2ffdd6650887ca5bbe6c5d7f4988343ea35f9214a2523b3f5cc76" |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-rev |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x2) | openshift-machine-api | kubelet | cluster-baremetal-operator-7648bf4f7c-nml8w | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3596fda1c9ca4be9b64d082aa96220e4da44af1b4c6c7f6ef57ea2b8a88ce6ef" |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-master-0 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-1f75404f08afc3926de8a846ea4bc6ff |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-master-0 | Uncordon | Update completed for config rendered-master-1f75404f08afc3926de8a846ea4bc6ff and node has been uncordoned |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-master-0 | NodeDone | Setting node ci-op-2fcpj5j6-f6035-2lklf-master-0, currentConfig rendered-master-1f75404f08afc3926de8a846ea4bc6ff to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-master-2 | NodeDone | Setting node ci-op-2fcpj5j6-f6035-2lklf-master-2, currentConfig rendered-master-1f75404f08afc3926de8a846ea4bc6ff to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-master-2 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-1f75404f08afc3926de8a846ea4bc6ff |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-master-2 | Uncordon | Update completed for config rendered-master-1f75404f08afc3926de8a846ea4bc6ff and node has been uncordoned |
| (x6) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| (x6) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to AWSEFSDriverVolumeMetrics=true,AdminNetworkPolicy=true,AlibabaPlatform=true,AzureWorkloadIdentity=true,BareMetalLoadBalancer=true,BuildCSIVolumes=true,ChunkSizeMiB=true,CloudDualStackNodeIPs=true,DisableKubeletCloudCredentialProviders=true,GCPLabelsTags=true,HardwareSpeed=true,IngressControllerLBSubnetsAWS=true,KMSv1=true,ManagedBootImages=true,MetricsServer=true,MultiArchInstallAWS=true,MultiArchInstallGCP=true,NetworkDiagnosticsConfig=true,NetworkLiveMigration=true,NodeDisruptionPolicy=true,OpenShiftPodSecurityAdmission=true,PrivateHostedZoneAWS=true,SetEIPForNLBIngressController=true,VSphereControlPlaneMachineSet=true,VSphereDriverConfiguration=true,VSphereStaticIPs=true,ValidatingAdmissionPolicy=true,AWSClusterHostedDNS=false,AdditionalRoutingCapabilities=false,AutomatedEtcdBackup=false,BootcNodeManagement=false,CSIDriverSharedResource=false,ClusterAPIInstall=false,ClusterAPIInstallIBMCloud=false,ClusterMonitoringConfig=false,DNSNameResolver=false,DynamicResourceAllocation=false,EtcdBackendQuota=false,EventedPLEG=false,Example=false,ExternalOIDC=false,GCPClusterHostedDNS=false,GatewayAPI=false,ImageStreamImportMode=false,IngressControllerDynamicConfigurationManager=false,InsightsConfig=false,InsightsConfigAPI=false,InsightsOnDemandDataGather=false,InsightsRuntimeExtractor=false,MachineAPIMigration=false,MachineAPIOperatorDisableMachineHealthCheckController=false,MachineAPIProviderOpenStack=false,MachineConfigNodes=false,ManagedBootImagesAWS=false,MaxUnavailableStatefulSet=false,MetricsCollectionProfiles=false,MixedCPUsAllocation=false,MultiArchInstallAzure=false,NetworkSegmentation=false,NewOLM=false,NodeSwap=false,OVNObservability=false,OnClusterBuild=false,PersistentIPsForVirtualization=false,PinnedImages=false,PlatformOperators=false,ProcMountType=false,RouteAdvertisements=false,RouteExternalCertificate=false,ServiceAccountTokenNodeBinding=false,SignatureStores=false,SigstoreImageVerification=false,TranslateStreamCloseWebsocketRequests=false,UpgradeStatus=false,UserNamespacesPodSecurityStandards=false,UserNamespacesSupport=false,VSphereMultiNetworks=false,VSphereMultiVCenters=false,VolumeGroupSnapshot=false |
| (x109) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing | no observedConfig |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-7bf6f695bf-4rjcs_b37ab23a-aa57-4eb7-a646-6232e69db501 became leader |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| (x5) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| (x6) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.5:2379,https://localhost:2379 |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | Failed | Error: ErrImagePull |
| (x6) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_AES_128_GCM_SHA256" "TLS_AES_256_GCM_SHA384" "TLS_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| (x6) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "admission": map[string]any{ + "pluginConfig": map[string]any{ + "PodSecurity": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{...}}, + }, + }, + "apiServerArguments": map[string]any{ + "api-audiences": []any{string("https://kubernetes.default.svc")}, + "etcd-servers": []any{string("https://10.0.0.5:2379"), string("https://localhost:2379")}, + "feature-gates": []any{ + string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), + string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), + string("BareMetalLoadBalancer=true"), string("BuildCSIVolumes=true"), + string("ChunkSizeMiB=true"), string("CloudDualStackNodeIPs=true"), ..., + }, + "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, + "send-retry-after-while-not-ready-once": []any{string("false")}, + "service-account-issuer": []any{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []any{string("https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/ope"...)}, + }, + "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]any{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []any{ + string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), + string("TLS_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), ..., + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []any{ + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-74d568664-qv7c9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e90b804647b2ffdd6650887ca5bbe6c5d7f4988343ea35f9214a2523b3f5cc76" in 8.956s (8.956s including waiting). Image size: 419594381 bytes. |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" |
openshift-network-diagnostics |
multus |
network-check-target-vkzwz |
AddedInterface |
Add eth0 [10.129.0.5/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ProbeError |
Readiness probe error: Get "https://10.0.0.6:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Failed |
Error: ImagePullBackOff | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Unhealthy |
Readiness probe failed: Get "https://10.0.0.6:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-74d568664-qv7c9 |
Started |
Started container webhook | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-74d568664-qv7c9 |
Created |
Created container webhook | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-8zwgp |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-8zwgp |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:96b42bbafd9e46a37021c4ed3a565dce80f369546a57a6573bcf89a827d0366f" | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" | |
openshift-kube-scheduler |
static-pod-installer |
installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 2 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] to [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"kube-scheduler" "1.31.1"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.31.1" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": 
configmap \"kube-apiserver-server-ca\" not found" | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-8zwgp |
Failed |
Error: ErrImagePull | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well") | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberAddAsLearner |
successfully added new member https://10.0.0.6:2380 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
etcd-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ScriptControllerErrorUpdatingStatus |
client rate limiter Wait returned an error: context canceled | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
EtcdMembersErrorUpdatingStatus |
client rate limiter Wait returned an error: context canceled | |
openshift-etcd-operator |
openshift-cluster-etcd-operator |
openshift-cluster-etcd-operator-lock |
LeaderElection |
etcd-operator-7bbcf99d5c-9746p_bab5ae12-dba6-4399-b182-d464571b7536 became leader | |
| (x8) | openshift-apiserver |
kubelet |
apiserver-5f8dd75f5c-s5f9w |
FailedMount |
MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-lock |
LeaderElection |
openshift-kube-scheduler-operator-7b64b578df-w9z5s_34e613fa-99a2-4b2b-bbe7-7329b76a5b4f became leader | |
| (x2) | openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
ReportEtcdMembersErrorUpdatingStatus |
etcds.operator.openshift.io "cluster" not found |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-kube-scheduler-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. 
Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. 
Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. 
Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. 
Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-zvr4t |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-b4vtj |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-b4vtj |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" in 24.151s (24.151s including waiting). Image size: 436172339 bytes. | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-zvr4t |
Created |
Created container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-zvr4t |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" in 24.172s (24.172s including waiting). Image size: 436172339 bytes. | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-zvr4t |
Started |
Started container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-zvr4t |
Started |
Started container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-b4vtj |
Created |
Created container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-b4vtj |
Started |
Started container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-b4vtj |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberPromote |
successfully promoted learner member https://10.0.0.6:2380 | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-b4vtj |
Created |
Created container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-zvr4t |
Created |
Created container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-b4vtj |
Killing |
Stopping container kube-rbac-proxy | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/etcd-endpoints has been created" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. 
Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-endpoints -n openshift-etcd: caused by changes in data.529ebe931a1baebd | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. 
Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-b4vtj |
Killing |
Stopping container multus-admission-controller | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n\u00a0\u00a0\t\t\tstring(\"https://10.0.0.5:2379\"),\n+\u00a0\t\t\tstring(\"https://10.0.0.6:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://10.0.0.5:2379,https://10.0.0.6:2379 | |
| (x7) | openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMissing |
apiServerArguments.etcd-servers has less than three endpoints: [https://10.0.0.5:2379 https://localhost:2379] |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://10.0.0.5:2379,https://10.0.0.6:2379,https://localhost:2379 |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-6f5fbdd644 |
SuccessfulDelete |
Deleted pod: apiserver-6f5fbdd644-x99cg | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-799c4c4c77 to 1 from 0 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-6f5fbdd644 to 2 from 3 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-799c4c4c77 |
SuccessfulCreate |
Created pod: apiserver-799c4c4c77-pmfl8 | |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "etcd-servers": []any{ string("https://10.0.0.5:2379"), + string("https://10.0.0.6:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, ... // 3 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2.") |
| (x2) | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:96b42bbafd9e46a37021c4ed3a565dce80f369546a57a6573bcf89a827d0366f" in 6.319s (6.319s including waiting). Image size: 444232272 bytes. |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Created | Created container provisioner-kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16da45916745bbc9992c5dd8987d939db995f1f48b4f320911ccff9e3639d194" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Started | Started container provisioner-kube-rbac-proxy |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container guard |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7667c744f7-8tlf7 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5736327df975ce44c3d66bbea2de7ed5f172f208a31de242272c919099785f4f": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5736327df975ce44c3d66bbea2de7ed5f172f208a31de242272c919099785f4f: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:534f588669e06fbc4ece2277ff400d1a3bfd9af5b5efd80d422226fb1316b15e": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:534f588669e06fbc4ece2277ff400d1a3bfd9af5b5efd80d422226fb1316b15e: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container guard |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-9p6n5 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-9p6n5 | Failed | Error: ErrImagePull |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container wait-for-host-port |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-9p6n5 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | Failed | Error: ErrImagePull |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7667c744f7-8tlf7 | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:55f75414549c1b6b93e2581c632d2c18c5ac28d543dff91360add584e744db45": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:55f75414549c1b6b93e2581c632d2c18c5ac28d543dff91360add584e744db45: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-kube-scheduler | multus | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.15/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" in 7.766s (7.766s including waiting). Image size: 897148932 bytes. |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nStaticPodsDegraded: pod/etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 container \"etcd\" is terminated: Error: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:32.165129Z\",\"caller\":\"flags/flag.go:93\",\"msg\":\"unrecognized environment variable\",\"environment-variable\":\"ETCDCTL_ENDPOINTS=\"}\nStaticPodsDegraded: {\"level\":\"warn\",\"ts\":\"2024-10-24T13:04:37.166032Z\",\"logger\":\"etcd-client\",\"caller\":\"v3@v3.5.16/retry_interceptor.go:63\",\"msg\":\"retrying of unary invoker failed\",\"target\":\"etcd-endpoints://0xc000498000/127.0.0.1:2379\",\"attempt\":0,\"error\":\"rpc error: code = DeadlineExceeded desc = latest balancer error: last connection error: connection error: desc = \\\"transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused\\\"\"}\nStaticPodsDegraded: Error: context deadline exceeded\nStaticPodsDegraded: could not parse revision.json, falling back to WAL parsing. Err=open /var/lib/etcd/revision.json: no such file or directorycould not find local cluster id: couldn't find cluster id in WAL or revision: open /var/lib/etcd/member/wal: no such file or directory\nStaticPodsDegraded: open /var/lib/etcd/revision.json: no such file or directory\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1]\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-1 -n openshift-kube-apiserver because it was missing |
| | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | Created | Created container kube-rbac-proxy |
| | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | Created | Created container kube-rbac-proxy |
| | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | Started | Started container kube-rbac-proxy |
| | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | Started | Started container kube-rbac-proxy |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | Failed | Error: ImagePullBackOff |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-recovery-controller |
| (x31) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-67dc75ccb9-j6m5x | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" in 29.576s (29.576s including waiting). Image size: 841241863 bytes. |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-67dc75ccb9-j6m5x | Created | Created container catalog-operator |
| | openshift-operator-lifecycle-manager | kubelet | catalog-operator-67dc75ccb9-j6m5x | Started | Started container catalog-operator |
| | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | Failed | Error: ImagePullBackOff |
| | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:55f75414549c1b6b93e2581c632d2c18c5ac28d543dff91360add584e744db45" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7667c744f7-8tlf7 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5736327df975ce44c3d66bbea2de7ed5f172f208a31de242272c919099785f4f" |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-7667c744f7-8tlf7 | Failed | Error: ImagePullBackOff |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-7b558f58f9-nfmbb_13d9ea19-1805-456b-a8c5-b0788a61d179 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-86c7d8d555-x49bl_4d299198-5d11-484d-8e20-b80154955cc7 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-6846798df4-kwxvp_b43b5000-4e4d-40af-9381-dce9b563a12b became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Started | Started container attacher-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Created | Created container attacher-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Started | Started container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Created | Created container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16da45916745bbc9992c5dd8987d939db995f1f48b4f320911ccff9e3639d194" in 3.711s (3.711s including waiting). Image size: 441126441 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:51d084be99ab25e6f1ce93612798a543842d3ac1c0644abd8a69e495e91be5fa" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-all-bundles-2 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodUpdated | Updated Pod/openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: client-ca" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 0 to 2 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 static pod not found |
| (x2) | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.5:2379,https://10.0.0.6:2379 |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{ "urls": []any{ string("https://10.0.0.5:2379"), + string("https://10.0.0.6:2379"), }, }, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-1 -n openshift-kube-apiserver because it was missing |
| (x7) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller | kube-apiserver-operator | SecretCreated | Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-6f5fbdd644-x99cg | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-oauth-apiserver | kubelet | apiserver-6f5fbdd644-x99cg | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| | openshift-kube-controller-manager | multus | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.16/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" |
| (x26) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerStuck |
unexpected addresses: 10.0.0.5 |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" in 4.431s (4.431s including waiting). Image size: 537475546 bytes. | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Started |
Started container openshift-apiserver | |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 3 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-oauth-apiserver | multus | apiserver-799c4c4c77-pmfl8 | AddedInterface | Add eth0 [10.130.0.48/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-1 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | Created | Created container openshift-apiserver |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6645c9cbc-qpg45 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e1ea7d1a5e79d5fd0e7e6cd18d3033b00767ac860d5c0390f2641a1faac6e214": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e1ea7d1a5e79d5fd0e7e6cd18d3033b00767ac860d5c0390f2641a1faac6e214: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | Failed | Error: ErrImagePull |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver: client rate limiter Wait returned an error: context canceled |
| | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:b2ea969540674dde10fe1aaddf9a7608b26256f5f939a55455c44523ca0a73e4": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:b2ea969540674dde10fe1aaddf9a7608b26256f5f939a55455c44523ca0a73e4: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-1 -n openshift-kube-apiserver because it was missing |
| (x7) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 0 to 1 because static pod is ready |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6645c9cbc-qpg45 | Failed | Error: ErrImagePull |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" in 3.552s (3.552s including waiting). Image size: 481662796 bytes. |
| | openshift-apiserver | replicaset-controller | apiserver-77d45ddc66 | SuccessfulCreate | Created pod: apiserver-77d45ddc66-sw2kd |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6645c9cbc-qpg45 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e1ea7d1a5e79d5fd0e7e6cd18d3033b00767ac860d5c0390f2641a1faac6e214" |
| | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | Failed | Error: ImagePullBackOff |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-77d45ddc66 to 1 from 0 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-5f8dd75f5c to 0 from 1 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." |
| | openshift-kube-controller-manager | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:b2ea969540674dde10fe1aaddf9a7608b26256f5f939a55455c44523ca0a73e4" |
| | openshift-kube-controller-manager | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-apiserver | replicaset-controller | apiserver-5f8dd75f5c | SuccessfulDelete | Deleted pod: apiserver-5f8dd75f5c-s5f9w |
| | openshift-authentication-operator | oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller | authentication-operator | SecretCreated | Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6645c9cbc-qpg45 | Failed | Error: ImagePullBackOff |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-tjp8m | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-749f4b99b7-fqnd2_34364051-8647-4ed0-8ad9-e71ff0027088 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-7c885b8899-z89zf_2e83f909-2da2-4f7c-a7b8-fe75ed59caee became leader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nAPIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6f5fbdd644-x99cg pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-apiserver | multus | apiserver-77d45ddc66-sw2kd | AddedInterface | Add eth0 [10.128.0.20/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 0 to 2 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 static pod not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticator/kubeConfig"), + }, + "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 4 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveWebhookTokenAuthenticator | authentication-token webhook configuration status changed from false to true |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-machine-api | kubelet | cluster-autoscaler-operator-776f9d4bf4-dthxh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:55f75414549c1b6b93e2581c632d2c18c5ac28d543dff91360add584e744db45" |
| (x2) | openshift-machine-api | kubelet | control-plane-machine-set-operator-7667c744f7-8tlf7 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5736327df975ce44c3d66bbea2de7ed5f172f208a31de242272c919099785f4f" |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" in 4.061s (4.061s including waiting). Image size: 537475546 bytes. |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Started | Started container openshift-apiserver |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 3 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-etcd | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-etcd because it was missing |
| | openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-85b957bbfc-dwcrh_3d1d5db4-5f76-4485-91ba-dc8f894164c1 became leader |
| | openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-1 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | multus | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.17/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" in 2.564s (2.564s including waiting). Image size: 496739801 bytes. |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 3 triggered by "required configmap/serviceaccount-ca has changed" |
| (x2) | openshift-insights | kubelet | insights-operator-7c7bf5974-mt94h | Started | Started container insights-operator |
| | openshift-etcd | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" in 3.116s (3.116s including waiting). Image size: 500148391 bytes. |
| (x2) | openshift-insights | kubelet | insights-operator-7c7bf5974-mt94h | Created | Created container insights-operator |
| | openshift-insights | kubelet | insights-operator-7c7bf5974-mt94h | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5b9e7d9b800f3edfd88efde26c1f252f6373852e486d8d23df953e97839431de" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-1 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-f67c66b4b to 1 from 2 |
| | openshift-etcd | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
kubelet |
installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-1 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver |
replicaset-controller |
apiserver-77d45ddc66 |
SuccessfulCreate |
Created pod: apiserver-77d45ddc66-mpqfk | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Killing |
Stopping container openshift-apiserver | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-77d45ddc66 to 2 from 1 | |
openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Killing |
Stopping container openshift-apiserver-check-endpoints | |
openshift-apiserver |
replicaset-controller |
apiserver-f67c66b4b |
SuccessfulDelete |
Deleted pod: apiserver-f67c66b4b-tjp8m | |
openshift-kube-scheduler |
kubelet |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine | |
| (x19) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-apiserver-pod-0\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmap \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
multus |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.18/23] from ovn-kubernetes | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
| (x2) | openshift-marketplace |
kubelet |
marketplace-operator-7ddb67b76c-d2flk |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:b2ea969540674dde10fe1aaddf9a7608b26256f5f939a55455c44523ca0a73e4" |
openshift-kube-scheduler |
kubelet |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 2 triggered by "required configmap/config has changed" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
PodCreated |
Created Pod/installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-etcd because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing | |
| (x2) | openshift-monitoring |
kubelet |
cluster-monitoring-operator-6645c9cbc-qpg45 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e1ea7d1a5e79d5fd0e7e6cd18d3033b00767ac860d5c0390f2641a1faac6e214" |
openshift-etcd |
kubelet |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-etcd |
kubelet |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-etcd |
kubelet |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
multus |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.19/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeControllerDegraded: All master nodes are ready",Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" | |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7648bf4f7c-nml8w |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3596fda1c9ca4be9b64d082aa96220e4da44af1b4c6c7f6ef57ea2b8a88ce6ef": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3596fda1c9ca4be9b64d082aa96220e4da44af1b4c6c7f6ef57ea2b8a88ce6ef: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-9p6n5 |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db: Get "https://registry.build02.ci.openshift.org/openshift/token?scope=repository%3Aci-op-2fcpj5j6%2Fstable%3Apull": dial tcp 34.74.144.21:443: i/o timeout | |
| (x2) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7648bf4f7c-nml8w |
Failed |
Error: ErrImagePull |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-9p6n5 |
Failed |
Error: ErrImagePull | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-92bwt |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Progressing message changed from "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to update 3 node pods" to "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" | |
openshift-cluster-csi-drivers |
daemonset-controller |
gcp-pd-csi-driver-node |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-node-92bwt | |
openshift-multus |
kubelet |
multus-admission-controller-749bf6f86d-5zx7k |
Started |
Started container kube-rbac-proxy | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-7497f58c94-vgnwd |
Started |
Started container olm-operator | |
openshift-multus |
kubelet |
multus-admission-controller-749bf6f86d-5zx7k |
Created |
Created container kube-rbac-proxy | |
openshift-multus |
kubelet |
multus-admission-controller-749bf6f86d-5zx7k |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-multus |
kubelet |
multus-admission-controller-749bf6f86d-5zx7k |
Failed |
Error: ErrImagePull | |
openshift-multus |
kubelet |
multus-admission-controller-749bf6f86d-5zx7k |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-7497f58c94-vgnwd |
Created |
Created container olm-operator | |
openshift-operator-lifecycle-manager |
package-server-manager-f7554d4b7-xd4h9_7e39493e-133b-4e09-817f-e9ff9a8f807b |
packageserver-controller-lock |
LeaderElection |
package-server-manager-f7554d4b7-xd4h9_7e39493e-133b-4e09-817f-e9ff9a8f807b became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-7497f58c94-vgnwd |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-f7554d4b7-xd4h9 |
Failed |
Error: ErrImagePull | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-7497f58c94-vgnwd |
Failed |
Error: ErrImagePull | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 3 triggered by "required configmap/serviceaccount-ca has changed" | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-f7554d4b7-xd4h9 |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-7497f58c94-vgnwd |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" | |
openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
RequirementsUnknown |
requirements not yet checked | |
openshift-multus |
kubelet |
multus-admission-controller-749bf6f86d-5zx7k |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing | |
openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
AllRequirementsMet |
all requirements found, attempting install | |
openshift-operator-lifecycle-manager |
deployment-controller |
packageserver |
ScalingReplicaSet |
Scaled up replica set packageserver-784848ddf to 2 | |
| (x2) | openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallSucceeded |
waiting for install components to report healthy |
openshift-operator-lifecycle-manager |
replicaset-controller |
packageserver-784848ddf |
SuccessfulCreate |
Created pod: packageserver-784848ddf-lw2pp | |
openshift-operator-lifecycle-manager |
replicaset-controller |
packageserver-784848ddf |
SuccessfulCreate |
Created pod: packageserver-784848ddf-lph6j | |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-7667c744f7-8tlf7 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5736327df975ce44c3d66bbea2de7ed5f172f208a31de242272c919099785f4f" in 19.734s (19.734s including waiting). Image size: 451420594 bytes. | |
openshift-machine-api |
cluster-autoscaler-operator-776f9d4bf4-dthxh_604539bc-9e1e-46c0-a84a-2cd5f8b7f305 |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-776f9d4bf4-dthxh_604539bc-9e1e-46c0-a84a-2cd5f8b7f305 became leader | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-zvr4t |
Killing |
Stopping container multus-admission-controller | |
openshift-operator-lifecycle-manager |
multus |
packageserver-784848ddf-lw2pp |
AddedInterface |
Add eth0 [10.130.0.49/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-784848ddf-lw2pp |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-784848ddf-lph6j |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" | |
openshift-operator-lifecycle-manager |
multus |
packageserver-784848ddf-lph6j |
AddedInterface |
Add eth0 [10.128.0.21/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
marketplace-operator-7ddb67b76c-d2flk |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:b2ea969540674dde10fe1aaddf9a7608b26256f5f939a55455c44523ca0a73e4" in 10.75s (10.75s including waiting). Image size: 430949796 bytes. | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 0 to 1 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 static pod not found |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-776f9d4bf4-dthxh |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:55f75414549c1b6b93e2581c632d2c18c5ac28d543dff91360add584e744db45" in 19.749s (19.749s including waiting). Image size: 438230001 bytes. | |
openshift-operator-lifecycle-manager |
operator-lifecycle-manager |
packageserver |
InstallWaiting |
apiServices not installed | |
openshift-multus |
kubelet |
multus-admission-controller-64669dd88c-zvr4t |
Killing |
Stopping container kube-rbac-proxy | |
openshift-multus |
replicaset-controller |
multus-admission-controller-64669dd88c |
SuccessfulDelete |
Deleted pod: multus-admission-controller-64669dd88c-zvr4t | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled down replica set multus-admission-controller-64669dd88c to 0 from 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing | |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-784848ddf-lw2pp |
Created |
Created container packageserver | |
| (x3) | openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-machine-api |
control-plane-machine-set-operator-7667c744f7-8tlf7_fc010622-a939-49cf-a875-2a8480284604 |
control-plane-machine-set-leader |
LeaderElection |
control-plane-machine-set-operator-7667c744f7-8tlf7_fc010622-a939-49cf-a875-2a8480284604 became leader | |
| (x3) | openshift-apiserver |
kubelet |
apiserver-f67c66b4b-tjp8m |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
openshift-operator-lifecycle-manager |
kubelet |
packageserver-784848ddf-lw2pp |
Started |
Started container packageserver | |
openshift-multus |
kubelet |
multus-admission-controller-749bf6f86d-5zx7k |
Created |
Created container multus-admission-controller | |
openshift-multus |
kubelet |
multus-admission-controller-749bf6f86d-5zx7k |
Started |
Started container multus-admission-controller | |
openshift-kube-controller-manager |
kubelet |
installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container installer | |
openshift-marketplace |
kubelet |
redhat-operators-skqd4 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" | |
openshift-marketplace |
multus |
redhat-marketplace-tmrr6 |
AddedInterface |
Add eth0 [10.130.0.50/23] from ovn-kubernetes | |
openshift-marketplace |
multus |
redhat-operators-skqd4 |
AddedInterface |
Add eth0 [10.129.0.20/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing | |
openshift-marketplace |
kubelet |
certified-operators-l48ct |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" | |
openshift-marketplace |
multus |
certified-operators-l48ct |
AddedInterface |
Add eth0 [10.128.0.22/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-controller-manager because it was missing | |
openshift-machine-api |
machineapioperator |
machine-api-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
| | openshift-kube-controller-manager | multus | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.21/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | multus | community-operators-vmqhx | AddedInterface | Add eth0 [10.130.0.51/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | machine-api-operator-c6cf9575f-k7jtl | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839" in 23.855s (23.855s including waiting). Image size: 843845591 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Created | Created container extract-utilities |
| | openshift-kube-controller-manager | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-machine-api | deployment-controller | machine-api-controllers | ScalingReplicaSet | Scaled up replica set machine-api-controllers-7785d897 to 1 |
| | openshift-machine-api | replicaset-controller | machine-api-controllers-7785d897 | SuccessfulCreate | Created pod: machine-api-controllers-7785d897-m4jlj |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Created | Created container extract-utilities |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:51d084be99ab25e6f1ce93612798a543842d3ac1c0644abd8a69e495e91be5fa": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:51d084be99ab25e6f1ce93612798a543842d3ac1c0644abd8a69e495e91be5fa: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Started | Started container resizer-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Failed | Error: ErrImagePull |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Created | Created container resizer-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16066fbd55fd9fb04fb075ce1829ef5787d8155fdbe86e51f9e5cefdb0d8aafd" |
| | openshift-oauth-apiserver | kubelet | apiserver-799c4c4c77-pmfl8 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839" |
| | openshift-machine-api | multus | machine-api-controllers-7785d897-m4jlj | AddedInterface | Add eth0 [10.129.0.22/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.23/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| (x2) | openshift-image-registry | kubelet | cluster-image-registry-operator-7c8c54f569-rsqg2 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f4977839a900eff18097ecd23a5143963ab4d9fd255383f2734eea3c1de97343": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f4977839a900eff18097ecd23a5143963ab4d9fd255383f2734eea3c1de97343: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-kube-apiserver | kubelet | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-5b66777f7c-9pqmc | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:009f83432ae94b0a725b4041638740872a214e0b95db4361d8bfa8a73c13aae0": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:009f83432ae94b0a725b4041638740872a214e0b95db4361d8bfa8a73c13aae0: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-784848ddf-lph6j | Created | Created container packageserver |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-784848ddf-lph6j | Started | Started container packageserver |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" in 8.633s (8.633s including waiting). Image size: 841241863 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-784848ddf-lph6j | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" in 9.968s (9.968s including waiting). Image size: 841241863 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" in 7.363s (7.363s including waiting). Image size: 841241863 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]") |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Created | Created container machineset-controller |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Started | Started container machineset-controller |
| | openshift-machine-api | machine-api-controllers-7785d897-m4jlj_ccd1e03a-f50d-4b94-995d-4d86692546cf | cluster-api-provider-machineset-leader | LeaderElection | machine-api-controllers-7785d897-m4jlj_ccd1e03a-f50d-4b94-995d-4d86692546cf became leader |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839" in 7.235s (7.235s including waiting). Image size: 843845591 bytes. |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2a18913ba70068e4f34dfd93992d0c71efbbf175fe66db130ad5880f9bb3b144" |
| (x2) | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | Failed | Error: ErrImagePull |
| (x2) | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 10.966s (10.966s including waiting). Image size: 967040755 bytes. |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Started | Started container extract-content |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6645c9cbc-qpg45 | Started | Started container cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6645c9cbc-qpg45 | Created | Created container cluster-monitoring-operator |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6645c9cbc-qpg45 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e1ea7d1a5e79d5fd0e7e6cd18d3033b00767ac860d5c0390f2641a1faac6e214" in 25.264s (25.264s including waiting). Image size: 464048596 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-8zwgp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16066fbd55fd9fb04fb075ce1829ef5787d8155fdbe86e51f9e5cefdb0d8aafd" in 9.727s (9.727s including waiting). Image size: 440417872 bytes. |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 10.924s (10.924s including waiting). Image size: 1110357249 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Created | Created container extract-content |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Started | Started container machine-healthcheck-controller |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839" already present on machine |
| | openshift-machine-api | machine-api-controllers-7785d897-m4jlj_e371c44b-b47b-407f-91f5-14402f739205 | cluster-api-provider-gcp-leader | LeaderElection | machine-api-controllers-7785d897-m4jlj_e371c44b-b47b-407f-91f5-14402f739205 became leader |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-79648c8fd6 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-79648c8fd6-swcgw |
| (x2) | openshift-monitoring | controllermanager | prometheus-operator-admission-webhook | NoPods | No matching pods found |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-79648c8fd6 to 2 |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-client-ca -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/alert-relabel-configs -n openshift-monitoring because it was missing |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-79648c8fd6 | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-79648c8fd6-9gxqf |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringTelemeterClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-h8c2w" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-operator because it was missing |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Started | Started container nodelink-controller |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Created | Created container nodelink-controller |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CSRApproval | The CSR "system:openshift:openshift-monitoring-h8c2w" has been approved |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CSRApproval | The CSR "system:openshift:openshift-monitoring-k29c6" has been approved |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringTelemeterClientCertRequester is available |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing |
| (x3) | openshift-machine-config-operator | machineconfigcontroller-nodecontroller | master | AnnotationChange | (combined from similar events): Node ci-op-2fcpj5j6-f6035-2lklf-master-1 now has machineconfiguration.openshift.io/reason= |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2a18913ba70068e4f34dfd93992d0c71efbbf175fe66db130ad5880f9bb3b144" in 4.208s (4.208s including waiting). Image size: 546478107 bytes. |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringtelemeterclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-k29c6" is created for OpenShiftMonitoringTelemeterClientCertRequester |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Created | Created container machine-healthcheck-controller |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" in 20.5s (20.5s including waiting). Image size: 536898687 bytes. |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Started | Started container kube-rbac-proxy-machine-mtrc |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Created | Created container kube-rbac-proxy-machine-mtrc |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Created | Created container kube-rbac-proxy-mhc-mtrc |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Started | Started container kube-rbac-proxy-mhc-mtrc |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Created | Created container csi-driver |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Created | Created container kube-rbac-proxy-machineset-mtrc |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Started | Started container kube-rbac-proxy-machineset-mtrc |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Started | Started container csi-driver |
| | openshift-machine-api | machine-api-controllers-7785d897-m4jlj_27c47702-18f5-47f6-abb9-dd5f9aaaa57d | cluster-api-provider-nodelink-leader | LeaderElection | machine-api-controllers-7785d897-m4jlj_27c47702-18f5-47f6-abb9-dd5f9aaaa57d became leader |
| | openshift-machine-api | machine-api-controllers-7785d897-m4jlj_688500ac-76ce-44d3-89ce-49bf800eafcf | cluster-api-provider-healthcheck-leader | LeaderElection | machine-api-controllers-7785d897-m4jlj_688500ac-76ce-44d3-89ce-49bf800eafcf became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-kube-apiserver | kubelet | installer-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container installer |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 7.128s (7.128s including waiting). Image size: 955380483 bytes. |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Created | Created container extract-content |
| (x2) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-5b66777f7c-9pqmc |
Failed |
Error: ImagePullBackOff |
| (x2) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-5b66777f7c-9pqmc |
BackOff |
Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-scheduler | |
openshift-kube-scheduler |
static-pod-installer |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 3 | |
| (x2) | openshift-oauth-apiserver |
kubelet |
apiserver-799c4c4c77-pmfl8 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-scheduler-recovery-controller | |
openshift-etcd |
static-pod-installer |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 3 | |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 13.417s (13.417s including waiting). Image size: 1411450299 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 7.823s (7.823s including waiting). Image size: 896974229 bytes. |
| (x3) | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | Failed | Error: ImagePullBackOff |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-vmqhx | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 7.847s (7.847s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-tmrr6 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Created | Created container registry-server |
| | openshift-marketplace | kubelet | certified-operators-l48ct | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 6.893s (6.893s including waiting). Image size: 896974229 bytes. |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodCreated | Created Pod/etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" in 4.416s (4.416s including waiting). Image size: 515033120 bytes. |
| | openshift-etcd | multus | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.24/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-kube-apiserver | multus | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.25/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container guard |
| | openshift-kube-apiserver | kubelet | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container guard |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| (x3) | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" |
| (x3) | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | Failed | Error: ImagePullBackOff |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcdctl |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-5b66777f7c-9pqmc | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-metrics |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 7.566s (7.566s including waiting). Image size: 896974229 bytes. |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd |
| (x9) | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:009f83432ae94b0a725b4041638740872a214e0b95db4361d8bfa8a73c13aae0" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | PodUpdated | Updated Pod/etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-etcd because it changed |
| (x9) | openshift-controller-manager | kubelet | controller-manager-7f4b9d6458-ltvdx | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x3) | openshift-image-registry | kubelet | cluster-image-registry-operator-7c8c54f569-rsqg2 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f4977839a900eff18097ecd23a5143963ab4d9fd255383f2734eea3c1de97343" |
| (x3) | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:14c8ddece7d7f397718c66c89d096692229698744ce52bb8afbfc6b5a0277e1f" |
| (x3) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:534f588669e06fbc4ece2277ff400d1a3bfd9af5b5efd80d422226fb1316b15e" |
| (x3) | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | multus | apiserver-77d45ddc66-mpqfk | AddedInterface | Add eth0 [10.130.0.52/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Started | Started container openshift-apiserver |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Created | Created container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.31.1" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Started | Started container csi-liveness-probe |
| | openshift-machine-api | machineset-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a | ReconcileError | error fetching machine type "e2-standard-4": error fetching machine type "e2-standard-4" in zone "us-central1-a": Get "https://compute.googleapis.com/compute/v1/projects/XXXXXXXXXXXXXXXXXXXXXXXX/zones/us-central1-a/machineTypes/e2-standard-4?alt=json&prettyPrint=false": oauth2: cannot fetch token: Post "https://oauth2.googleapis.com/token": dial tcp: lookup oauth2.googleapis.com: i/o timeout |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f: Get "https://registry.build02.ci.openshift.org/openshift/token?scope=repository%3Aci-op-2fcpj5j6%2Fstable%3Apull": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] to [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"kube-controller-manager" "1.31.1"} {"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Failed | Error: ErrImagePull |
| | openshift-kube-controller-manager | static-pod-installer | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 3 |
| | openshift-apiserver | replicaset-controller | apiserver-77d45ddc66 | SuccessfulCreate | Created pod: apiserver-77d45ddc66-4mc2q |
| | openshift-apiserver | replicaset-controller | apiserver-f67c66b4b | SuccessfulDelete | Deleted pod: apiserver-f67c66b4b-sppzf |
| (x2) | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-sppzf | Killing | Stopping container openshift-apiserver |
| (x2) | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Failed | Error: ImagePullBackOff |
| | openshift-apiserver | kubelet | apiserver-f67c66b4b-sppzf | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | multus | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.26/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" in 3.631s (3.631s including waiting). Image size: 487094132 bytes. |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container guard |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodCreated | Created Pod/kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container guard |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_8f8235e8-5bd0-4ef4-a90b-fcfc2c403c3d became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| (x4) | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Readiness probe error: Get "https://10.0.0.3:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| (x4) | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Unhealthy | Readiness probe failed: Get "https://10.0.0.3:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x9) | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodUpdated | Updated Pod/kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-controller-manager because it changed |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-client | etcd-operator | MemberAddAsLearner | successfully added new member https://10.0.0.3:2380 |
| | openshift-oauth-apiserver | kubelet | apiserver-799c4c4c77-pmfl8 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| (x2) | openshift-oauth-apiserver | kubelet | apiserver-799c4c4c77-pmfl8 | Failed | Error: ErrImagePull |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-f67c66b4b-sppzf pod)" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberPromote |
successfully promoted learner member https://10.0.0.3:2380 | |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-xpqvd |
ProbeError |
Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-xpqvd |
Unhealthy |
Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-ingress-operator |
certificate_controller |
router-ca |
CreatedWildcardCACert |
Created a default wildcard CA certificate | |
openshift-ingress-operator |
kubelet |
ingress-operator-6b9fd98fb4-hksdp |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" in 17.903s (17.903s including waiting). Image size: 484196808 bytes. | |
openshift-ingress-operator |
kubelet |
ingress-operator-6b9fd98fb4-hksdp |
Created |
Created container ingress-operator | |
openshift-ingress-operator |
kubelet |
ingress-operator-6b9fd98fb4-hksdp |
Started |
Started container ingress-operator | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap |
CreatedSCCRanges |
created SCC ranges for openshift-ingress namespace | |
openshift-ingress-operator |
ingress_controller |
default |
Admitted |
ingresscontroller passed validation | |
openshift-ingress |
service-controller |
router-default |
EnsuringLoadBalancer |
Ensuring load balancer | |
openshift-ingress |
deployment-controller |
router-default |
ScalingReplicaSet |
Scaled up replica set router-default-bbcfc976b to 2 | |
openshift-ingress |
replicaset-controller |
router-default-bbcfc976b |
SuccessfulCreate |
Created pod: router-default-bbcfc976b-4r8cp | |
openshift-ingress |
replicaset-controller |
router-default-bbcfc976b |
SuccessfulCreate |
Created pod: router-default-bbcfc976b-xnpn7 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-config-managed |
certificate_publisher_controller |
default-ingress-cert |
PublishedRouterCA |
Published "default-ingress-cert" in "openshift-config-managed" | |
openshift-config-managed |
certificate_publisher_controller |
router-certs |
PublishedRouterCertificates |
Published router certificates | |
openshift-authentication-operator |
cluster-authentication-operator-trust-distribution-trustdistributioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.ci-op-2fcpj5j6-f6035.gcp-\"...)},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveRouterSecret | namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX", "names":[]interface {}{"*.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX"}}} |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-authentication-operator | cluster-authentication-operator-routercertsdomainvalidationcontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| (x2) | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:009f83432ae94b0a725b4041638740872a214e0b95db4361d8bfa8a73c13aae0": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:009f83432ae94b0a725b4041638740872a214e0b95db4361d8bfa8a73c13aae0: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| (x3) | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-5b66777f7c-9pqmc | Failed | Error: ErrImagePull |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container wait-for-host-port |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container wait-for-host-port |
| (x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | Failed | Error: ErrImagePull |
| | openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-5b66777f7c-9pqmc | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| (x2) | openshift-oauth-apiserver | kubelet | apiserver-799c4c4c77-pmfl8 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" |
| (x2) | openshift-oauth-apiserver | kubelet | apiserver-799c4c4c77-pmfl8 | Failed | Error: ImagePullBackOff |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 0 to 3 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 3",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 3") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 0 to 3 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 static pod not found |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| (x6) | openshift-apiserver | kubelet | apiserver-f67c66b4b-sppzf | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x6) | openshift-apiserver | kubelet | apiserver-f67c66b4b-sppzf | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3." |
| | openshift-kube-controller-manager | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-799c4c4c77 to 0 from 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 3 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{ + string("https://10.0.0.3:2379"), string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, ... // 3 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| (x2) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:534f588669e06fbc4ece2277ff400d1a3bfd9af5b5efd80d422226fb1316b15e": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:534f588669e06fbc4ece2277ff400d1a3bfd9af5b5efd80d422226fb1316b15e: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| (x3) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | Failed | Error: ErrImagePull |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.2dee09550bc489af |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-dcf867d89 to 1 from 0 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-dcf867d89 | SuccessfulCreate | Created pod: apiserver-dcf867d89-zrhwj |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: " map[string]any{\n \t\"apiServerArguments\": map[string]any{\n \t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n \t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n \t\t\"etcd-servers\": []any{\n+ \t\t\tstring(\"https://10.0.0.3:2379\"),\n \t\t\tstring(\"https://10.0.0.5:2379\"),\n \t\t\tstring(\"https://10.0.0.6:2379\"),\n \t\t},\n \t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"tls-min-version\": string(\"VersionTLS12\"),\n \t},\n }\n" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.5:2379,https://10.0.0.6:2379 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.5:2379,https://10.0.0.6:2379,https://localhost:2379 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-799c4c4c77 | SuccessfulDelete | Deleted pod: apiserver-799c4c4c77-pmfl8 |
| | openshift-kube-controller-manager | multus | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.53/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | Failed | Error: ImagePullBackOff |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-scripts -n openshift-etcd: cause by changes in data.etcd.env |
| | openshift-oauth-apiserver | multus | apiserver-dcf867d89-zrhwj | AddedInterface | Add eth0 [10.130.0.54/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" |
| | openshift-machine-api | machineset-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-b | ReconcileError | error fetching machine type "e2-standard-4": error fetching machine type "e2-standard-4" in zone "us-central1-b": Get "https://compute.googleapis.com/compute/v1/projects/XXXXXXXXXXXXXXXXXXXXXXXX/zones/us-central1-b/machineTypes/e2-standard-4?alt=json&prettyPrint=false": oauth2: cannot fetch token: Post "https://oauth2.googleapis.com/token": dial tcp: lookup oauth2.googleapis.com on 172.30.0.10:53: read udp 10.129.0.22:47395->172.30.0.10:53: i/o timeout |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler | kubelet | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | multus | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.27/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" in 3.788s (3.788s including waiting). Image size: 475806593 bytes. |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-7c8c54f569-rsqg2 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f4977839a900eff18097ecd23a5143963ab4d9fd255383f2734eea3c1de97343": initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f4977839a900eff18097ecd23a5143963ab4d9fd255383f2734eea3c1de97343: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| (x3) | openshift-image-registry | kubelet | cluster-image-registry-operator-7c8c54f569-rsqg2 | Failed | Error: ErrImagePull |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Created | Created container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Created | Created container fix-audit-permissions |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Started | Started container oauth-apiserver |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_15d62170-335f-4c48-ae19-0fd7f6658401 became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler |
| (x3) | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | Failed | Error: ErrImagePull |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Started | Started container fix-audit-permissions |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" in 18.427s (18.427s including waiting). Image size: 396191352 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-92bwt | Started | Started container csi-node-driver-registrar |
| (x3) | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:14c8ddece7d7f397718c66c89d096692229698744ce52bb8afbfc6b5a0277e1f": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:14c8ddece7d7f397718c66c89d096692229698744ce52bb8afbfc6b5a0277e1f: pinging container registry registry.build02.ci.openshift.org: Get "https://registry.build02.ci.openshift.org/v2/": dial tcp 34.74.144.21:443: i/o timeout |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nGCPPDCSIDriverOperatorCRProgressing: GCPPDDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver | static-pod-installer | installer-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 2 |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" to "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to update pods" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing message changed from "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to update pods" to "GCPPDCSIDriverOperatorCRProgressing: GCPPDDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" |
| | openshift-cluster-csi-drivers | deployment-controller | gcp-pd-csi-driver-controller | ScalingReplicaSet | Scaled down replica set gcp-pd-csi-driver-controller-78fcc99686 to 0 from 1 |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Created | Created container csi-driver |
| | openshift-cluster-csi-drivers | replicaset-controller | gcp-pd-csi-driver-controller-78fcc99686 | SuccessfulDelete | Deleted pod: gcp-pd-csi-driver-controller-78fcc99686-zgfxx |
| | openshift-cluster-csi-drivers | deployment-controller | gcp-pd-csi-driver-controller | ScalingReplicaSet | Scaled up replica set gcp-pd-csi-driver-controller-745666687f to 2 from 1 |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-b5rxc |
Started |
Started container csi-driver | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-b5rxc |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:96b42bbafd9e46a37021c4ed3a565dce80f369546a57a6573bcf89a827d0366f" | |
openshift-cluster-csi-drivers |
replicaset-controller |
gcp-pd-csi-driver-controller-745666687f |
SuccessfulCreate |
Created pod: gcp-pd-csi-driver-controller-745666687f-b5rxc | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-etcd | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-etcd | multus | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.55/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-etcd | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-etcd | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-dcf867d89 | SuccessfulCreate | Created pod: apiserver-dcf867d89-n6t8j |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-6f5fbdd644-l2g2b | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6f5fbdd644 | SuccessfulDelete | Deleted pod: apiserver-6f5fbdd644-l2g2b |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation" |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6f5fbdd644 to 1 from 2 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-dcf867d89 to 2 from 1 |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_06ffb765-2591-488d-97db-547e464523b1 became leader |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Created | Created container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Created | Created container provisioner-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:96b42bbafd9e46a37021c4ed3a565dce80f369546a57a6573bcf89a827d0366f" in 2.869s (2.869s including waiting). Image size: 444232272 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Started | Started container csi-provisioner |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Started | Started container provisioner-kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "required configmap/serviceaccount-ca has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 3" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 3; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 3" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 3; 0 nodes have achieved new revision 4" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16da45916745bbc9992c5dd8987d939db995f1f48b4f320911ccff9e3639d194" |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{ "urls": []any{ + string("https://10.0.0.3:2379"), string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), }, }, } |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.5:2379,https://10.0.0.6:2379 |
| (x3) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | Failed | Error: ImagePullBackOff |
| | openshift-etcd | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.31.1" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] to [{"raw-internal" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"kube-apiserver" "1.31.1"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-kube-controller-manager | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container installer |
| | openshift-apiserver | replicaset-controller | apiserver-5d5579f445 | SuccessfulCreate | Created pod: apiserver-5d5579f445-5twj5 |
| | openshift-kube-apiserver | multus | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.28/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16da45916745bbc9992c5dd8987d939db995f1f48b4f320911ccff9e3639d194" in 2.868s (2.868s including waiting). Image size: 441126441 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodCreated | Created Pod/kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4.") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-apiserver | replicaset-controller | apiserver-77d45ddc66 | SuccessfulDelete | Deleted pod: apiserver-77d45ddc66-4mc2q |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Started | Started container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| (x4) | openshift-cluster-machine-approver | kubelet | machine-approver-5697c6f6dd-kpg6d | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:009f83432ae94b0a725b4041638740872a214e0b95db4361d8bfa8a73c13aae0" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container guard |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Started | Started container attacher-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Created | Created container csi-attacher |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Created | Created container attacher-kube-rbac-proxy |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container guard |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:51d084be99ab25e6f1ce93612798a543842d3ac1c0644abd8a69e495e91be5fa" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
| (x4) | openshift-dns-operator | kubelet | dns-operator-79c9668d4f-5xbr8 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:14c8ddece7d7f397718c66c89d096692229698744ce52bb8afbfc6b5a0277e1f" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-f67c66b4b-sppzf pod)" to "All is well",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:51d084be99ab25e6f1ce93612798a543842d3ac1c0644abd8a69e495e91be5fa" in 2.59s (2.59s including waiting). Image size: 440932380 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Started | Started container csi-resizer |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 3, desired generation is 4." |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Created | Created container resizer-kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Started | Started container resizer-kube-rbac-proxy |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-controller-745666687f-b5rxc | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:16066fbd55fd9fb04fb075ce1829ef5787d8155fdbe86e51f9e5cefdb0d8aafd" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-controller-manager | multus | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.56/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodUpdated | Updated Pod/kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it changed |
| | openshift-kube-controller-manager | kubelet | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-etcd | multus | installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.57/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "required configmap/config has changed" |
| | openshift-etcd | kubelet | installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-machine-api | cluster-baremetal-operator-7648bf4f7c-nml8w_570621a0-75c4-4200-8d1a-9f6e6430d06e | cluster-baremetal-operator | LeaderElection | cluster-baremetal-operator-7648bf4f7c-nml8w_570621a0-75c4-4200-8d1a-9f6e6430d06e became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from True to False ("GCPPDCSIDriverOperatorCRProgressing: All is well") |
| | openshift-apiserver | multus | apiserver-5d5579f445-5twj5 | AddedInterface | Add eth0 [10.129.0.29/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-etcd | kubelet | installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Started | Started container openshift-apiserver |
| (x4) | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-585cd96855-j89wm | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:534f588669e06fbc4ece2277ff400d1a3bfd9af5b5efd80d422226fb1316b15e" |
| | openshift-apiserver | replicaset-controller | apiserver-77d45ddc66 | SuccessfulDelete | Deleted pod: apiserver-77d45ddc66-mpqfk |
| | openshift-ingress | service-controller | router-default | EnsuredLoadBalancer | Ensured load balancer |
| | openshift-apiserver | replicaset-controller | apiserver-5d5579f445 | SuccessfulCreate | Created pod: apiserver-5d5579f445-zhg9c |
| (x6) | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | (combined from similar events): Scaled up replica set apiserver-5d5579f445 to 2 from 1 |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-cluster-machine-approver | ci-op-2fcpj5j6-f6035-2lklf-master-1_8a0a5328-f407-456f-9d45-a1e2b0cfcadf | cluster-machine-approver-leader | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_8a0a5328-f407-456f-9d45-a1e2b0cfcadf became leader |
| | openshift-kube-scheduler | static-pod-installer | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-machine-api | machineset-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-c | ReconcileError | error fetching machine type "e2-standard-4": error fetching machine type "e2-standard-4" in zone "us-central1-c": Get "https://compute.googleapis.com/compute/v1/projects/XXXXXXXXXXXXXXXXXXXXXXXX/zones/us-central1-c/machineTypes/e2-standard-4?alt=json&prettyPrint=false": oauth2: cannot fetch token: Post "https://oauth2.googleapis.com/token": dial tcp: lookup oauth2.googleapis.com: i/o timeout |
| (x5) | openshift-image-registry | kubelet | cluster-image-registry-operator-7c8c54f569-rsqg2 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f4977839a900eff18097ecd23a5143963ab4d9fd255383f2734eea3c1de97343" |
| (x5) | openshift-image-registry | kubelet | cluster-image-registry-operator-7c8c54f569-rsqg2 | Failed | Error: ImagePullBackOff |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-5b66777f7c-9pqmc_3dcd62a5-b46c-4681-a26d-e15d6d7167a0 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-5b66777f7c-9pqmc_3dcd62a5-b46c-4681-a26d-e15d6d7167a0 became leader |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-8bh59 |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-p8rhd | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
| | kube-system | | | | Required control plane pods have been created |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-lc926 |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lc926 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-p8rhd |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container wait-for-host-port |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-8bh59 | Created | Created container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-8bh59 | Started | Started container tuned |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_802de4c3-3a5e-4034-b840-ac6007ebddf6 became leader |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-8bh59 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| (x8) | openshift-oauth-apiserver | kubelet | apiserver-6f5fbdd644-l2g2b | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-p8rhd | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" in 5.093s (5.093s including waiting). Image size: 680556885 bytes. |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lc926 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" in 4.842s (4.842s including waiting). Image size: 680556885 bytes. |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lc926 | Created | Created container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-lc926 | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-p8rhd | Created | Created container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-p8rhd | Started | Started container tuned |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-6f5fbdd644-l2g2b | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-dns-operator | cluster-dns-operator | dns-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-dgsqw |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-4lqzz |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-sstbc |
| | openshift-dns | kubelet | dns-default-zmd45 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-cn9sx |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-5bx75 |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-zmd45 |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-bootstrap | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | openshift-dns | kubelet | node-resolver-5bx75 | Created | Created container dns-node-resolver |
| | openshift-dns | multus | dns-default-4lqzz | AddedInterface | Add eth0 [10.128.0.23/23] from ovn-kubernetes |
| | openshift-dns | kubelet | node-resolver-cn9sx | Started | Started container dns-node-resolver |
| | openshift-dns | kubelet | node-resolver-5bx75 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" already present on machine |
| | openshift-dns | kubelet | node-resolver-dgsqw | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" already present on machine |
| | openshift-dns | kubelet | node-resolver-dgsqw | Started | Started container dns-node-resolver |
| | openshift-dns | kubelet | dns-default-sstbc | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" |
| | openshift-dns | kubelet | node-resolver-5bx75 | Started | Started container dns-node-resolver |
| | openshift-dns | kubelet | node-resolver-dgsqw | Created | Created container dns-node-resolver |
| | openshift-dns | kubelet | dns-default-4lqzz | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" |
| | openshift-dns | kubelet | node-resolver-cn9sx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" already present on machine |
| | openshift-dns | kubelet | node-resolver-cn9sx | Created | Created container dns-node-resolver |
| | openshift-dns | kubelet | dns-default-zmd45 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" |
| | openshift-dns | multus | dns-default-sstbc | AddedInterface | Add eth0 [10.130.0.58/23] from ovn-kubernetes |
| | openshift-dns | multus | dns-default-zmd45 | AddedInterface | Add eth0 [10.129.0.30/23] from ovn-kubernetes |
| | openshift-dns | kubelet | dns-default-zmd45 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" in 2.264s (2.264s including waiting). Image size: 464555200 bytes. |
| | openshift-dns | kubelet | dns-default-sstbc | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-sstbc | Created | Created container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-sstbc | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-dns | kubelet | dns-default-sstbc | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-sstbc | Created | Created container dns |
| | openshift-dns | kubelet | dns-default-sstbc | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" in 2.463s (2.463s including waiting). Image size: 464555200 bytes. |
| | openshift-dns | kubelet | dns-default-4lqzz | Created | Created container kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-dns | kubelet | dns-default-4lqzz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-dns | kubelet | dns-default-4lqzz | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-zmd45 | Created | Created container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-zmd45 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-dns | kubelet | dns-default-zmd45 | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-4lqzz | Created | Created container dns |
| | openshift-dns | kubelet | dns-default-zmd45 | Created | Created container dns |
| | openshift-dns | kubelet | dns-default-zmd45 | Started | Started container dns |
| (x10) | openshift-route-controller-manager | kubelet | route-controller-manager-7b877984c7-9dd9p | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-dns | kubelet | dns-default-4lqzz | Started | Started container kube-rbac-proxy |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-7c8c54f569-rsqg2_41fda9d2-5709-4205-ab2c-80396528bd94 became leader |
| | openshift-dns | kubelet | dns-default-4lqzz | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" in 2.961s (2.961s including waiting). Image size: 464555200 bytes. |
| | openshift-machine-api | machineset-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-f | ReconcileError | error fetching machine type "e2-standard-4": error fetching machine type "e2-standard-4" in zone "us-central1-f": Get "https://compute.googleapis.com/compute/v1/projects/XXXXXXXXXXXXXXXXXXXXXXXX/zones/us-central1-f/machineTypes/e2-standard-4?alt=json&prettyPrint=false": oauth2: cannot fetch token: Post "https://oauth2.googleapis.com/token": dial tcp: lookup oauth2.googleapis.com on 172.30.0.10:53: read udp 10.129.0.22:36167->172.30.0.10:53: i/o timeout |
| | openshift-kube-apiserver | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| (x10) | openshift-route-controller-manager | kubelet | route-controller-manager-7b877984c7-fvxj4 | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x7) | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver | multus | installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.31/23] from ovn-kubernetes |
| (x7) | openshift-apiserver | kubelet | apiserver-77d45ddc66-mpqfk | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-oauth-apiserver | multus | apiserver-dcf867d89-n6t8j | AddedInterface | Add eth0 [10.128.0.24/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | Created | Created container fix-audit-permissions |
| | openshift-kube-controller-manager | static-pod-installer | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-1 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | Created | Created container oauth-apiserver |
| (x10) | openshift-controller-manager | kubelet | controller-manager-78b7d7d855-kpv7q | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| | openshift-oauth-apiserver | kubelet | apiserver-6f5fbdd644-gqhhc | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-dcf867d89 | SuccessfulCreate | Created pod: apiserver-dcf867d89-xktv2 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-dcf867d89 to 3 from 2 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6f5fbdd644 to 0 from 1 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6f5fbdd644 | SuccessfulDelete | Deleted pod: apiserver-6f5fbdd644-gqhhc |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x33) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 10.0.0.5 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-786b85b959-zrm7s_e73dfb89-7b05-44aa-b248-4afda833785f became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" in 6.84s (6.84s including waiting). Image size: 897148932 bytes. |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-d8db88b9d to 1 from 0 |
| | openshift-controller-manager | replicaset-controller | controller-manager-86cf9fc757 | SuccessfulCreate | Created pod: controller-manager-86cf9fc757-rf8dk |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-7b877984c7 | SuccessfulDelete | Deleted pod: route-controller-manager-7b877984c7-9dd9p |
| | openshift-controller-manager | replicaset-controller | controller-manager-5f544c54d7 | SuccessfulCreate | Created pod: controller-manager-5f544c54d7-vlsx8 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7b877984c7 to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-f4fb8bb6c to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7b877984c7 to 1 from 2 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-86cf9fc757 to 1 from 0 |
| | openshift-controller-manager | replicaset-controller | controller-manager-78b7d7d855 | SuccessfulDelete | Deleted pod: controller-manager-78b7d7d855-kpv7q |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-78b7d7d855 to 0 from 1 |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-f4fb8bb6c | SuccessfulCreate |
Created pod: route-controller-manager-f4fb8bb6c-xr2n6 | |
| (x2) | openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
(combined from similar events): Scaled up replica set controller-manager-5f544c54d7 to 1 from 0 |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7b877984c7 |
SuccessfulDelete |
Deleted pod: route-controller-manager-7b877984c7-fvxj4 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-d8db88b9d |
SuccessfulCreate |
Created pod: route-controller-manager-d8db88b9d-58sj4 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-7f4b9d6458 |
SuccessfulDelete |
Deleted pod: controller-manager-7f4b9d6458-ltvdx | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-controller-manager because it was missing | |
| (x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-guardcontroller |
kube-controller-manager-operator |
PodCreated |
Created Pod/kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container guard | |
openshift-kube-controller-manager |
multus |
kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
AddedInterface |
Add eth0 [10.130.0.59/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container guard | |
openshift-route-controller-manager |
multus |
route-controller-manager-f4fb8bb6c-xr2n6 |
AddedInterface |
Add eth0 [10.130.0.60/23] from ovn-kubernetes | |
openshift-controller-manager |
multus |
controller-manager-86cf9fc757-rf8dk |
AddedInterface |
Add eth0 [10.128.0.25/23] from ovn-kubernetes | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 3" | |
openshift-controller-manager |
kubelet |
controller-manager-86cf9fc757-rf8dk |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" | |
openshift-etcd |
static-pod-installer |
installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-f4fb8bb6c-xr2n6 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-bootstrap_94fd7365-423a-4c70-aff5-fbcfd96937bb stopped leading | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container cluster-policy-controller | |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine | |
kube-system |
Required control plane pods have been created | ||||
openshift-route-controller-manager |
kubelet |
route-controller-manager-d8db88b9d-58sj4 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerOK |
found expected kube-apiserver endpoints | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" in 3.181s (3.181s including waiting). Image size: 487094132 bytes. | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-route-controller-manager |
multus |
route-controller-manager-d8db88b9d-58sj4 |
AddedInterface |
Add eth0 [10.128.0.26/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 0, desired replicas is 3" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" | |
openshift-controller-manager |
kubelet |
controller-manager-86cf9fc757-rf8dk |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" in 3.248s (3.248s including waiting). Image size: 540027783 bytes. | |
openshift-controller-manager |
kubelet |
controller-manager-86cf9fc757-rf8dk |
Created |
Created container controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-86cf9fc757-rf8dk |
Started |
Started container controller-manager | |
openshift-controller-manager |
openshift-controller-manager |
openshift-master-controllers |
LeaderElection |
controller-manager-86cf9fc757-rf8dk became leader | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-f4fb8bb6c-xr2n6 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" in 3.402s (3.403s including waiting). Image size: 467457423 bytes. | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-f4fb8bb6c-xr2n6 |
Created |
Created container route-controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-f4fb8bb6c-xr2n6 |
Started |
Started container route-controller-manager | |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-f4fb8bb6c-xr2n6_b5a5d6f8-697f-4630-b6f8-18585955b396 became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container setup | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-d8db88b9d-58sj4 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" in 3.075s (3.075s including waiting). Image size: 467457423 bytes. | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container setup | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" in 3.246s (3.246s including waiting). Image size: 515033120 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-guardcontroller |
kube-controller-manager-operator |
PodUpdated |
Updated Pod/kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-controller-manager because it changed | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-ensure-env-vars | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: caused by changes in data.service-account-002.pub | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: caused by changes in data.service-account-002.pub | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 4 triggered by "required configmap/sa-token-signing-certs has changed" | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodCreated |
Created Pod/etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-etcd because it was missing | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-rev | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-rev | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-metrics | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-guardcontroller |
etcd-operator |
PodUpdated |
Updated Pod/etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-etcd because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Unhealthy |
Startup probe failed: Get "https://10.0.0.4:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ProbeError |
Startup probe error: Get "https://10.0.0.4:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-apiserver |
static-pod-installer |
installer-3-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 3 | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 4 triggered by "required configmap/sa-token-signing-certs has changed" | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
| (x8) | openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" | |
| (x9) | openshift-oauth-apiserver |
kubelet |
apiserver-6f5fbdd644-gqhhc |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager |
kubelet |
controller-manager-6b59c47496-mgxqd |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6b59c47496-mgxqd_openshift-controller-manager_69917f88-069b-4879-9c24-a1698a1948e0_0(f7f2bc0d2201ba96cbb1aa63d172c56a1389d4f4406739411939037451ae1d64): error adding pod openshift-controller-manager_controller-manager-6b59c47496-mgxqd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f7f2bc0d2201ba96cbb1aa63d172c56a1389d4f4406739411939037451ae1d64" Netns:"/var/run/netns/809c2462-99dd-4f2a-b694-5628f3d83962" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b59c47496-mgxqd;K8S_POD_INFRA_CONTAINER_ID=f7f2bc0d2201ba96cbb1aa63d172c56a1389d4f4406739411939037451ae1d64;K8S_POD_UID=69917f88-069b-4879-9c24-a1698a1948e0" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b59c47496-mgxqd] networking: Multus: [openshift-controller-manager/controller-manager-6b59c47496-mgxqd/69917f88-069b-4879-9c24-a1698a1948e0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6b59c47496-mgxqd?timeout=1m0s": http2: client connection lost ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-dc88f967c-cfpfn |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-dc88f967c-cfpfn_openshift-route-controller-manager_6da3fb87-2e65-4733-9bf3-972e3a6f365f_0(7217730fb3a52b23f3ac57fcb81154b25866726ad0445964708027c5a5752ebe): error adding pod openshift-route-controller-manager_route-controller-manager-dc88f967c-cfpfn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7217730fb3a52b23f3ac57fcb81154b25866726ad0445964708027c5a5752ebe" Netns:"/var/run/netns/26422cf0-1248-4624-bf17-e4001c225e2a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-dc88f967c-cfpfn;K8S_POD_INFRA_CONTAINER_ID=7217730fb3a52b23f3ac57fcb81154b25866726ad0445964708027c5a5752ebe;K8S_POD_UID=6da3fb87-2e65-4733-9bf3-972e3a6f365f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn] networking: Multus: [openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn/6da3fb87-2e65-4733-9bf3-972e3a6f365f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-dc88f967c-cfpfn in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-dc88f967c-cfpfn in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dc88f967c-cfpfn?timeout=1m0s": http2: client connection lost ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberAddAsLearner |
successfully added new member https://10.0.0.4:2380 | |
openshift-controller-manager |
kubelet |
controller-manager-5f544c54d7-vlsx8 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-vfzts" : failed to fetch token: Post "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token": http2: client connection lost | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 0 to 4 because static pod is ready | |
openshift-marketplace |
kubelet |
certified-operators-l48ct |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
community-operators-vmqhx |
Killing |
Stopping container registry-server | |
openshift-machine-api |
gcpcontroller |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Update |
Updated Machine ci-op-2fcpj5j6-f6035-2lklf-master-1 | |
openshift-marketplace |
kubelet |
redhat-marketplace-tmrr6 |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-controller-manager |
multus |
controller-manager-5f544c54d7-vlsx8 |
AddedInterface |
Add eth0 [10.130.0.61/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Created |
Created container extract-utilities | |
openshift-controller-manager |
kubelet |
controller-manager-5f544c54d7-vlsx8 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Started |
Started container extract-utilities | |
openshift-marketplace |
multus |
certified-operators-44t92 |
AddedInterface |
Add eth0 [10.128.0.27/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Started |
Started container extract-utilities | |
openshift-apiserver |
multus |
apiserver-5d5579f445-zhg9c |
AddedInterface |
Add eth0 [10.130.0.62/23] from ovn-kubernetes | |
openshift-marketplace |
multus |
community-operators-wj7jh |
AddedInterface |
Add eth0 [10.128.0.28/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
multus |
redhat-marketplace-ctnwj |
AddedInterface |
Add eth0 [10.128.0.29/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Started |
Started container fix-audit-permissions | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
multus |
etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
AddedInterface |
Add eth0 [10.130.0.63/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container guard | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container guard | |
| (x2) | openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| (x2) | openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Started |
Started container openshift-apiserver |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 2.68s (2.68s including waiting). Image size: 967040755 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
| (x2) | openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Created |
Created container openshift-apiserver |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 2.679s (2.679s including waiting). Image size: 955380483 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-controller-manager |
kubelet |
controller-manager-5f544c54d7-vlsx8 |
Started |
Started container controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-5f544c54d7-vlsx8 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" in 3.204s (3.204s including waiting). Image size: 540027783 bytes. | |
| (x2) | openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Created |
Created container openshift-apiserver-check-endpoints |
| (x2) | openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Started |
Started container openshift-apiserver-check-endpoints |
openshift-controller-manager |
kubelet |
controller-manager-5f544c54d7-vlsx8 |
Created |
Created container controller-manager | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.509s (1.509s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.52s (1.52s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-44t92 |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 4.545s (4.545s including waiting). Image size: 1110357249 bytes. | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-ctnwj |
Created |
Created container registry-server | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 0 to 4 because node ci-op-2fcpj5j6-f6035-2lklf-master-2 static pod not found | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 914ms (914ms including waiting). Image size: 896974229 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager because it was missing | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Started |
Started container registry-server | |
openshift-kube-controller-manager |
kubelet |
installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" | |
openshift-kube-controller-manager |
multus |
installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.30/23] from ovn-kubernetes | |
openshift-machine-api |
gcpcontroller |
ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Update |
Updated Machine ci-op-2fcpj5j6-f6035-2lklf-master-2 | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-1_851e928f-b807-4782-8697-27c3f40c50e1 became leader | |
| (x4) | openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
BackOff |
Back-off restarting failed container openshift-apiserver-check-endpoints in pod apiserver-5d5579f445-zhg9c_openshift-apiserver(77b9b08b-7004-4742-b247-6b860dace2e4) |
| (x5) | openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
BackOff |
Back-off restarting failed container openshift-apiserver in pod apiserver-5d5579f445-zhg9c_openshift-apiserver(77b9b08b-7004-4742-b247-6b860dace2e4) |
openshift-controller-manager |
replicaset-controller |
controller-manager-6b59c47496 |
SuccessfulDelete |
Deleted pod: controller-manager-6b59c47496-mgxqd | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container installer | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-0 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller | |
openshift-kube-controller-manager |
kubelet |
installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" in 3.935s (3.935s including waiting). Image size: 481662796 bytes. | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-dc88f967c |
SuccessfulDelete |
Deleted pod: route-controller-manager-dc88f967c-cfpfn | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-d8db88b9d |
SuccessfulCreate |
Created pod: route-controller-manager-d8db88b9d-2pwz8 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-dc88f967c to 0 from 1 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-5f544c54d7 to 2 from 1 | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-2 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3",Available changed from False to True ("All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container installer | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-6b59c47496 to 0 from 1 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-d8db88b9d to 2 from 1 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5f544c54d7 |
SuccessfulCreate |
Created pod: controller-manager-5f544c54d7-4lmsc | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
multus |
installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.31/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" | |
openshift-etcd |
kubelet |
installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
kubelet |
installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container installer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd |
kubelet |
installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container installer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-client |
etcd-operator |
MemberPromote |
successfully promoted learner member https://10.0.0.4:2380 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing | |
openshift-machine-api |
gcpcontroller |
ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
FailedCreate |
ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x: reconciler failed to Create machine: requeue in: 20s | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing | |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionCreateFailed |
Failed to create revision 5: Delete "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/encryption-config-5": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionCreateFailed |
Failed to create revision 5: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreateFailed |
Failed to create Secret/service-account-private-key-5 -n openshift-kube-controller-manager: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets": dial tcp 172.30.0.1:443: connect: connection refused | |
| (x3) | openshift-apiserver |
kubelet |
apiserver-5d5579f445-zhg9c |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: MachineConfigControllerFailed |
Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: failed to apply machine config controller manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/machine-config-controller": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-controller-manager |
kubelet |
controller-manager-6b59c47496-mgxqd |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6b59c47496-mgxqd_openshift-controller-manager_69917f88-069b-4879-9c24-a1698a1948e0_0(0d6ee2db3131cd4f56c0382aefe4af7a80c5de8c3b44831c62bf570fcc192060): error adding pod openshift-controller-manager_controller-manager-6b59c47496-mgxqd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0d6ee2db3131cd4f56c0382aefe4af7a80c5de8c3b44831c62bf570fcc192060" Netns:"/var/run/netns/3cbe8e0a-b275-4976-bd71-1fed88e8a3ad" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b59c47496-mgxqd;K8S_POD_INFRA_CONTAINER_ID=0d6ee2db3131cd4f56c0382aefe4af7a80c5de8c3b44831c62bf570fcc192060;K8S_POD_UID=69917f88-069b-4879-9c24-a1698a1948e0" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b59c47496-mgxqd] networking: Multus: [openshift-controller-manager/controller-manager-6b59c47496-mgxqd/69917f88-069b-4879-9c24-a1698a1948e0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6b59c47496-mgxqd?timeout=1m0s": dial tcp 10.0.0.2:6443: i/o timeout ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-dc88f967c-cfpfn_openshift-route-controller-manager_6da3fb87-2e65-4733-9bf3-972e3a6f365f_0(4ff99c3a53664dd4ef84f2b8e16c3423a941f9d4ae748639f15f442f6074eed2): error adding pod openshift-route-controller-manager_route-controller-manager-dc88f967c-cfpfn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4ff99c3a53664dd4ef84f2b8e16c3423a941f9d4ae748639f15f442f6074eed2" Netns:"/var/run/netns/7f764580-c65f-48b0-b2bf-4f6156ab2e7c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-dc88f967c-cfpfn;K8S_POD_INFRA_CONTAINER_ID=4ff99c3a53664dd4ef84f2b8e16c3423a941f9d4ae748639f15f442f6074eed2;K8S_POD_UID=6da3fb87-2e65-4733-9bf3-972e3a6f365f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn] networking: Multus: [openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn/6da3fb87-2e65-4733-9bf3-972e3a6f365f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-dc88f967c-cfpfn in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-dc88f967c-cfpfn in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dc88f967c-cfpfn?timeout=1m0s": dial tcp 10.0.0.2:6443: i/o timeout ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-machine-api | gcpcontroller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | FailedCreate | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2: reconciler failed to Create machine: requeue in: 20s |
| | default | apiserver | openshift-kube-apiserver | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-dc88f967c-cfpfn_openshift-route-controller-manager_6da3fb87-2e65-4733-9bf3-972e3a6f365f_0(eaf9415a125a65beaaeb8efce088e5a33dfc0254a127c9e51944a0e2a69f7203): error adding pod openshift-route-controller-manager_route-controller-manager-dc88f967c-cfpfn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"eaf9415a125a65beaaeb8efce088e5a33dfc0254a127c9e51944a0e2a69f7203" Netns:"/var/run/netns/22b66e52-c446-4ae2-82de-38e24ebff05d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-dc88f967c-cfpfn;K8S_POD_INFRA_CONTAINER_ID=eaf9415a125a65beaaeb8efce088e5a33dfc0254a127c9e51944a0e2a69f7203;K8S_POD_UID=6da3fb87-2e65-4733-9bf3-972e3a6f365f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn] networking: Multus: [openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn/6da3fb87-2e65-4733-9bf3-972e3a6f365f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-dc88f967c-cfpfn in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-dc88f967c-cfpfn in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dc88f967c-cfpfn?timeout=1m0s": dial tcp 10.0.0.2:6443: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | default | apiserver | openshift-kube-apiserver | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-machine-api | gcpcontroller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | FailedCreate | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz: reconciler failed to Create machine: requeue in: 20s |
| | openshift-machine-api | gcpcontroller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | Update | Updated Machine ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-machine-api | gcpcontroller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | Update | Updated Machine ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-machine-api | gcpcontroller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | Update | Updated Machine ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x3) | openshift-machine-api | gcpcontroller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | FailedUpdate | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz: reconciler failed to Update machine: requeue in: 20s |
| | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6b59c47496-mgxqd_openshift-controller-manager_69917f88-069b-4879-9c24-a1698a1948e0_0(866a3f384cf209492fea7dfece02cdc9f7b0b517f840698533ddcc8ca21c0d1e): error adding pod openshift-controller-manager_controller-manager-6b59c47496-mgxqd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"866a3f384cf209492fea7dfece02cdc9f7b0b517f840698533ddcc8ca21c0d1e" Netns:"/var/run/netns/9447d1bd-e9a3-4e51-8f16-c0fae16cca1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b59c47496-mgxqd;K8S_POD_INFRA_CONTAINER_ID=866a3f384cf209492fea7dfece02cdc9f7b0b517f840698533ddcc8ca21c0d1e;K8S_POD_UID=69917f88-069b-4879-9c24-a1698a1948e0" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b59c47496-mgxqd] networking: Multus: [openshift-controller-manager/controller-manager-6b59c47496-mgxqd/69917f88-069b-4879-9c24-a1698a1948e0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6b59c47496-mgxqd?timeout=1m0s": dial tcp 10.0.0.2:6443: i/o timeout ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-controller-manager |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | EtcdMembersErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-zpzjn | Started | Started container ovnkube-cluster-manager |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-zpzjn | Created | Created container ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-zpzjn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| (x4) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x4) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-dc88f967c-cfpfn_openshift-route-controller-manager_6da3fb87-2e65-4733-9bf3-972e3a6f365f_0(ef0a6ae3578d15a666c11441f5394333382195605ad99cb4b4ef133e81b67d44): error adding pod openshift-route-controller-manager_route-controller-manager-dc88f967c-cfpfn to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ef0a6ae3578d15a666c11441f5394333382195605ad99cb4b4ef133e81b67d44" Netns:"/var/run/netns/83568791-1b6d-4ec3-8f34-31dee1ad6433" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-dc88f967c-cfpfn;K8S_POD_INFRA_CONTAINER_ID=ef0a6ae3578d15a666c11441f5394333382195605ad99cb4b4ef133e81b67d44;K8S_POD_UID=6da3fb87-2e65-4733-9bf3-972e3a6f365f" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn] networking: Multus: [openshift-route-controller-manager/route-controller-manager-dc88f967c-cfpfn/6da3fb87-2e65-4733-9bf3-972e3a6f365f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-dc88f967c-cfpfn in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-dc88f967c-cfpfn in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-dc88f967c-cfpfn?timeout=1m0s": dial tcp 10.0.0.2:6443: i/o timeout ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x14) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | InstallerPodFailed | Failed to create installer pod for revision 5 count 0 on node "ci-op-2fcpj5j6-f6035-2lklf-master-2": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-2": dial tcp 172.30.0.1:443: connect: connection refused |
| (x5) | openshift-route-controller-manager | multus | route-controller-manager-dc88f967c-cfpfn | AddedInterface | Add eth0 [10.129.0.13/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6b59c47496-mgxqd_openshift-controller-manager_69917f88-069b-4879-9c24-a1698a1948e0_0(3c0c32a4872bbc8414e2f4bcc8d7e7d5a541f1655507ea321138334a45e9c2db): error adding pod openshift-controller-manager_controller-manager-6b59c47496-mgxqd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3c0c32a4872bbc8414e2f4bcc8d7e7d5a541f1655507ea321138334a45e9c2db" Netns:"/var/run/netns/a9f9c7cb-d724-4e9d-a2c1-991b533526ff" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b59c47496-mgxqd;K8S_POD_INFRA_CONTAINER_ID=3c0c32a4872bbc8414e2f4bcc8d7e7d5a541f1655507ea321138334a45e9c2db;K8S_POD_UID=69917f88-069b-4879-9c24-a1698a1948e0" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b59c47496-mgxqd] networking: Multus: [openshift-controller-manager/controller-manager-6b59c47496-mgxqd/69917f88-069b-4879-9c24-a1698a1948e0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6b59c47496-mgxqd?timeout=1m0s": dial tcp 10.0.0.2:6443: i/o timeout ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-6b59c47496-mgxqd_openshift-controller-manager_69917f88-069b-4879-9c24-a1698a1948e0_0(0be7bb7a6753076e3c24568832f678e44d6d6720618590b6331055b2642abcda): error adding pod openshift-controller-manager_controller-manager-6b59c47496-mgxqd to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0be7bb7a6753076e3c24568832f678e44d6d6720618590b6331055b2642abcda" Netns:"/var/run/netns/862c2bba-e535-4e82-a1ca-5515541c7c75" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-6b59c47496-mgxqd;K8S_POD_INFRA_CONTAINER_ID=0be7bb7a6753076e3c24568832f678e44d6d6720618590b6331055b2642abcda;K8S_POD_UID=69917f88-069b-4879-9c24-a1698a1948e0" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-6b59c47496-mgxqd] networking: Multus: [openshift-controller-manager/controller-manager-6b59c47496-mgxqd/69917f88-069b-4879-9c24-a1698a1948e0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-6b59c47496-mgxqd in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-6b59c47496-mgxqd?timeout=1m0s": dial tcp 10.0.0.2:6443: connect: connection refused ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| (x14) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | Failed to create installer pod for revision 4 count 0 on node "ci-op-2fcpj5j6-f6035-2lklf-master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | KubeAPIReadyz | readyz=true |
| (x6) | openshift-controller-manager | multus | controller-manager-6b59c47496-mgxqd | AddedInterface | Add eth0 [10.129.0.7/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" |
| | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" in 2.448s (2.448s including waiting). Image size: 467457423 bytes. |
| | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" in 3.35s (3.35s including waiting). Image size: 540027783 bytes. |
| | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | Created | Created container controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | Created | Created container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | Started | Started container route-controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | Started | Started container controller-manager |
| (x14) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | Failed to create installer pod for revision 4 count 0 on node "ci-op-2fcpj5j6-f6035-2lklf-master-2": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-2": dial tcp 172.30.0.1:443: connect: connection refused |
| (x14) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
| (x15) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 5: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_77cfffe2-c943-41b4-b25c-c3916b73ae9f became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | Unhealthy | Readiness probe failed: Get "https://10.129.0.13:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" architecture="amd64" |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | ProbeError | Readiness probe error: Get "https://10.129.0.13:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x15) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 5: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | ProbeError | Readiness probe error: Get "https://10.129.0.13:8443/healthz": read tcp 10.129.0.2:56606->10.129.0.13:8443: read: connection reset by peer body: |
| | openshift-controller-manager | kubelet | controller-manager-6b59c47496-mgxqd | Killing | Stopping container controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | Killing | Stopping container route-controller-manager |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_b242ccae-227e-40ca-9a9c-41100435d9c4 became leader |
| | openshift-route-controller-manager | kubelet | route-controller-manager-dc88f967c-cfpfn | Unhealthy | Readiness probe failed: Get "https://10.129.0.13:8443/healthz": read tcp 10.129.0.2:56606->10.129.0.13:8443: read: connection reset by peer |
| | openshift-marketplace | kubelet | redhat-operators-skqd4 | Killing | Stopping container registry-server |
| (x14) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | Failed to create installer pod for revision 5 count 0 on node "ci-op-2fcpj5j6-f6035-2lklf-master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-scheduler | kubelet | installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-kube-apiserver | multus | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.32/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Created | Created container extract-utilities |
| | openshift-oauth-apiserver | multus | apiserver-dcf867d89-xktv2 | AddedInterface | Add eth0 [10.129.0.35/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-xktv2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Started | Started container extract-utilities |
| | openshift-kube-apiserver | kubelet | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-xktv2 | Created | Created container fix-audit-permissions |
| | openshift-kube-scheduler | multus | installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.33/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-operators-9ll4n | AddedInterface | Add eth0 [10.129.0.34/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-xktv2 | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-xktv2 | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-xktv2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-xktv2 | Created | Created container oauth-apiserver |
| | openshift-route-controller-manager | multus | route-controller-manager-d8db88b9d-2pwz8 | AddedInterface | Add eth0 [10.129.0.37/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
openshift-kube-apiserver |
kubelet |
installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-marketplace |
kubelet |
redhat-operators-9ll4n |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-cluster-csi-drivers |
external-snapshotter-leader-pd.csi.storage.gke.io/ci-op-2fcpj5j6-f6035-2lklf-master-1 |
external-snapshotter-leader-pd-csi-storage-gke-io |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-1 became leader | |
openshift-kube-scheduler |
kubelet |
installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-controller-manager |
multus |
controller-manager-5f544c54d7-4lmsc |
AddedInterface |
Add eth0 [10.129.0.36/23] from ovn-kubernetes | |
| | openshift-cluster-csi-drivers | pd.csi.storage.gke.io/1729775144253-2028-pd.csi.storage.gke.io | pd-csi-storage-gke-io | LeaderElection | 1729775144253-2028-pd-csi-storage-gke-io became leader |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 2.265s (2.265s including waiting). Image size: 1411450299 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.218s (1.218s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-9ll4n | Started | Started container registry-server |
| | openshift-ingress-operator | cluster-ingress-operator | ingress-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| (x3) | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | Started | Started container marketplace-operator |
| | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | ProbeError | Readiness probe error: Get "http://10.130.0.30:8080/healthz": dial tcp 10.130.0.30:8080: connect: connection refused body: |
| | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | Unhealthy | Readiness probe failed: Get "http://10.130.0.30:8080/healthz": dial tcp 10.130.0.30:8080: connect: connection refused |
| | openshift-ovn-kubernetes | controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-54656c84bd-cn29j became leader |
| | openshift-cluster-csi-drivers | external-attacher-leader-pd.csi.storage.gke.io/ci-op-2fcpj5j6-f6035-2lklf-master-1 | external-attacher-leader-pd-csi-storage-gke-io | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1 became leader |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_961d1609-f2d7-4cff-b5c9-e9b02d56c71f became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it changed |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-9bd7f8667-lfs5z_9ae673bf-6377-4c3b-815f-1c78cd4e9b91 became leader |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-5b66777f7c-9pqmc_033b2984-90b9-4a1f-888b-bc101f28455c | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-5b66777f7c-9pqmc_033b2984-90b9-4a1f-888b-bc101f28455c became leader |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | static-pod-installer | installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 5 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-786b85b959-zrm7s_10cc68ed-c8e5-4a4b-b433-0d745f0b8aa5 became leader |
| | openshift-cluster-csi-drivers | external-resizer-pd-csi-storage-gke-io/ci-op-2fcpj5j6-f6035-2lklf-master-2 | external-resizer-pd-csi-storage-gke-io | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3" |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | static-pod-installer | installer-4-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 4 |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver |
| (x9) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_cc069e8c-90d0-48f7-9f1c-3dbd1de87993 became leader |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_5e3544f1-2cb8-4a24-95de-a14cb63e22a9 became leader |
| (x18) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-f4fb8bb6c to 0 from 1 |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller |
| | openshift-route-controller-manager | kubelet | route-controller-manager-f4fb8bb6c-xr2n6 | Killing | Stopping container route-controller-manager |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-f4fb8bb6c | SuccessfulDelete | Deleted pod: route-controller-manager-f4fb8bb6c-xr2n6 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-5f544c54d7 to 3 from 2 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-86cf9fc757 to 0 from 1 |
| | openshift-controller-manager | replicaset-controller | controller-manager-86cf9fc757 | SuccessfulDelete | Deleted pod: controller-manager-86cf9fc757-rf8dk |
| | openshift-controller-manager | kubelet | controller-manager-86cf9fc757-rf8dk | Killing | Stopping container controller-manager |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_1d6d360c-921c-4c1e-b1d3-23332d840c2b became leader |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-d8db88b9d to 3 from 2 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-controller-manager | replicaset-controller | controller-manager-5f544c54d7 | SuccessfulCreate | Created pod: controller-manager-5f544c54d7-7l2qd |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"oauth-apiserver" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-d8db88b9d | SuccessfulCreate | Created pod: route-controller-manager-d8db88b9d-rcc6x |
| (x8) | default | machineapioperator | machine-api | Status degraded | minimum worker replica count (2) not yet met: current running replicas 0, waiting for [ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz] |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorVersionChanged | clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: status.versions changed from [] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| (x3) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-apiserver: cause by changes in data.config.yaml |
| | openshift-route-controller-manager | multus | route-controller-manager-d8db88b9d-rcc6x | AddedInterface | Add eth0 [10.130.0.64/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-5f544c54d7-7l2qd | Created | Created container controller-manager |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-5f544c54d7-7l2qd | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" already present on machine |
| | openshift-controller-manager | multus | controller-manager-5f544c54d7-7l2qd | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-5f544c54d7-7l2qd | Started | Started container controller-manager |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-d8db88b9d-2pwz8_0562fb12-9750-45fd-b22b-79b8febca7bf became leader |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-5d5579f445 to 1 from 2 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 6 triggered by "required configmap/etcd-endpoints has changed" |
| | openshift-apiserver | replicaset-controller | apiserver-6d7dbc56c5 | SuccessfulCreate | Created pod: apiserver-6d7dbc56c5-jl6d4 |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-pod -n openshift-etcd: cause by changes in data.pod.yaml |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-5f544c54d7-4lmsc became leader |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6d7dbc56c5 to 1 from 0 |
| | openshift-apiserver | replicaset-controller | apiserver-5d5579f445 | SuccessfulDelete | Deleted pod: apiserver-5d5579f445-zhg9c |
| (x3) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/restore-etcd-pod -n openshift-etcd: cause by changes in data.pod.yaml,data.quorum-restore-pod.yaml |
| | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Started | Started container fix-audit-permissions |
| | openshift-apiserver | multus | apiserver-6d7dbc56c5-jl6d4 | AddedInterface | Add eth0 [10.130.0.65/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Created | Created container fix-audit-permissions |
| (x18) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 5 triggered by "required secret/localhost-recovery-client-token has changed" |
| (x2) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Created | Created container openshift-apiserver |
| (x2) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Started | Started container openshift-apiserver |
| (x2) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Started | Started container openshift-apiserver-check-endpoints |
| (x2) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Created | Created container openshift-apiserver-check-endpoints |
| (x3) | openshift-machine-api | gcpcontroller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | Update | Updated Machine ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "MachineDeletionHooksControllerDegraded: Operation cannot be fulfilled on machines.machine.openshift.io \"ci-op-2fcpj5j6-f6035-2lklf-master-2\": the object has been modified; please apply your changes to the latest version and try again\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:11.245859 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:21.246355 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:31.244720 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:41.245006 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:51.245434 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:10:01.245005 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:10:01.246460 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:10:01.246502 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: \nEtcdMembersDegraded: No unhealthy members found",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5" to "NodeInstallerProgressing: 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 1; 1 node is at revision 3; 1 node is at revision 5; 0 nodes have achieved new revision 6\nEtcdMembersAvailable: 4 members are available" |
| (x8) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on openshiftapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379,https://10.0.0.6:2379 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ ... // 2 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{ "urls": []any{ string("https://10.0.0.3:2379"), + string("https://10.0.0.4:2379"), string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), }, }, } |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | StartingNewRevision | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on openshiftapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." |
| (x4) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | BackOff | Back-off restarting failed container openshift-apiserver-check-endpoints in pod apiserver-6d7dbc56c5-jl6d4_openshift-apiserver(3a1b2709-fdcf-456b-b70e-d1cf7765ad41) |
| (x5) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | BackOff | Back-off restarting failed container openshift-apiserver in pod apiserver-6d7dbc56c5-jl6d4_openshift-apiserver(3a1b2709-fdcf-456b-b70e-d1cf7765ad41) |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on openshiftapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing |
| | openshift-network-node-identity | ci-op-2fcpj5j6-f6035-2lklf-master-0_2889f7c0-0c50-43b4-a6c9-a7924e059ff1 | ovnkube-identity | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_2889f7c0-0c50-43b4-a6c9-a7924e059ff1 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 6 triggered by "required secret/localhost-recovery-client-token has changed" |
| (x6) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 6" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| (x19) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| (x3) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| (x20) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379,https://10.0.0.6:2379,https://localhost:2379 |
| (x20) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{ string("https://10.0.0.3:2379"), + string("https://10.0.0.4:2379"), string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, ... // 3 identical entries }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 6 triggered by "secret \"localhost-recovery-serving-certkey-5\" not found" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-5,localhost-recovery-serving-certkey-5",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: " map[string]any{\n \t\"apiServerArguments\": map[string]any{\n \t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n \t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\t\"etcd-servers\": []any{\n \t\t\tstring(\"https://10.0.0.3:2379\"),\n+ \t\t\tstring(\"https://10.0.0.4:2379\"),\n \t\t\tstring(\"https://10.0.0.5:2379\"),\n \t\t\tstring(\"https://10.0.0.6:2379\"),\n \t\t},\n \t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"tls-min-version\": string(\"VersionTLS12\"),\n \t},\n }\n" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.5:2379,https://10.0.0.6:2379 |
| | openshift-kube-scheduler | multus | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.38/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-rcc6x | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-8bdbc6bbb to 1 from 0 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-dcf867d89 to 2 from 3 |
| | openshift-kube-scheduler | multus | installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.39/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-dcf867d89 | SuccessfulDelete | Deleted pod: apiserver-dcf867d89-xktv2 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-5,service-account-private-key-5\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-rcc6x | Created | Created container route-controller-manager |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-rcc6x | Started | Started container route-controller-manager |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-xktv2 | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-8bdbc6bbb | SuccessfulCreate | Created pod: apiserver-8bdbc6bbb-8ndgb |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-rcc6x | ProbeError | Readiness probe error: Get "https://10.130.0.64:8443/healthz": read tcp 10.130.0.2:58008->10.130.0.64:8443: read: connection reset by peer body: |
| (x2) | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-rcc6x | Unhealthy | Readiness probe failed: Get "https://10.130.0.64:8443/healthz": read tcp 10.130.0.2:58008->10.130.0.64:8443: read: connection reset by peer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: ",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 5" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 6 triggered by "secret \"service-account-private-key-5\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.") | |
openshift-kube-scheduler |
kubelet |
installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-prunecontroller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler |
kubelet |
revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container pruner | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler | multus | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.66/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" |
| (x7) | default | kubelet | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | NodeHasSufficientPID | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x status is now: NodeHasSufficientPID |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | multus | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.33/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | secrets: localhost-recovery-serving-certkey-5 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-5,localhost-recovery-serving-certkey-5" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-serving-certkey-5" |
| (x8) | default | kubelet | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | NodeHasNoDiskPressure | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x status is now: NodeHasNoDiskPressure |
| (x8) | default | kubelet | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | NodeHasSufficientMemory | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x status is now: NodeHasSufficientMemory |
| (x28) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | secrets: localhost-recovery-client-token-5,localhost-recovery-serving-certkey-5 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | revision-pruner-6-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" in 2.948s (2.948s including waiting). Image size: 479171827 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 7 triggered by "secret \"service-account-private-key-6\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 6 triggered by "secret \"service-account-private-key-5\" not found" |
| (x8) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-rcc6x | Unhealthy | Readiness probe failed: Get "https://10.130.0.64:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | Started | Started container setup |
| (x20) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | secrets: localhost-recovery-client-token-5,service-account-private-key-5 |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | Created | Created container setup |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-5,service-account-private-key-5\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-6,service-account-private-key-6\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: ",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 6" |
| | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" in 4.748s (4.748s including waiting). Image size: 437909244 bytes. |
| (x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-serving-certkey-5" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-etcd | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-etcd | multus | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-7 -n openshift-kube-controller-manager because it was missing |
| (x9) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-rcc6x | ProbeError | Readiness probe error: Get "https://10.130.0.64:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 6 triggered by "secret \"localhost-recovery-serving-certkey-5\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-7 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 7 triggered by "secret \"service-account-private-key-6\" not found" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-7,service-account-private-key-7\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: service-account-private-key-7\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| (x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | secrets: localhost-recovery-client-token-7,service-account-private-key-7 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | secrets: service-account-private-key-7 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-6,service-account-private-key-6\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: localhost-recovery-client-token-7,service-account-private-key-7\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: ",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 7" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 8 triggered by "secret \"service-account-private-key-7\" not found" | |
| (x12) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
secrets: localhost-recovery-client-token-6,service-account-private-key-6 |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: service-account-private-key-7\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:14.627959 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:24.627766 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:34.627494 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:44.628050 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.627778 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:09:54.629332 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-2: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:09:54.629375 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: service-account-private-key-7" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: secrets: service-account-private-key-7" to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-7 -n openshift-kube-apiserver because it was missing | |
openshift-etcd |
kubelet |
installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container installer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-7 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/cluster-policy-controller-config-8 -n openshift-kube-controller-manager: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/kube-apiserver-cert-syncer-kubeconfig-7 -n openshift-kube-apiserver: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: MachineOSBuilderFailed |
Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: failed to apply machine os builder manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/machine-os-builder": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreateFailed |
Failed to create Pod/installer-5-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
InstallerPodFailed |
Failed to create installer pod for revision 5 count 0 on node "ci-op-2fcpj5j6-f6035-2lklf-master-0": Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreateFailed |
Failed to create Pod/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
InstallerPodFailed |
Failed to create installer pod for revision 7 count 1 on node "ci-op-2fcpj5j6-f6035-2lklf-master-2": Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods": dial tcp 172.30.0.1:443: connect: connection refused | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
| (x9) | openshift-oauth-apiserver |
kubelet |
apiserver-dcf867d89-xktv2 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x9) | openshift-oauth-apiserver |
kubelet |
apiserver-dcf867d89-xktv2 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x8) | default |
kubelet |
ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
NodeHasNoDiskPressure |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz status is now: NodeHasNoDiskPressure |
| (x8) | default |
kubelet |
ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
NodeHasSufficientMemory |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz status is now: NodeHasSufficientMemory |
| (x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
Started |
Started container kube-rbac-proxy-crio |
| (x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
Created |
Created container kube-rbac-proxy-crio |
| (x13) | openshift-machine-config-operator |
machineconfigoperator |
machine-config |
OperatorDegraded: MachineConfigPoolsFailed |
Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-apiserver-cert-syncer | |
| (x14) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionCreateFailed |
Failed to create revision 8: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
KubeAPIReadyz |
readyz=true | |
| (x13) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
InstallerPodFailed |
Failed to create installer pod for revision 7 count 1 on node "ci-op-2fcpj5j6-f6035-2lklf-master-2": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2": dial tcp 172.30.0.1:443: connect: connection refused |
| (x14) | openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionCreateFailed |
Failed to create revision 7: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| (x12) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretUpdateFailed |
Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Put "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/service-account-private-key": dial tcp 172.30.0.1:443: connect: connection refused |
| (x4) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
Started |
Started container kube-rbac-proxy-crio |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretUpdated |
Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-8ndgb |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8bdbc6bbb-8ndgb_openshift-oauth-apiserver_a39a6c4b-3a37-4615-ab4f-7920f69abe8f_0(3c35736de649910d69cd9d033a1d2e9388033fafe812d24c59012c5d69e9468f): error adding pod openshift-oauth-apiserver_apiserver-8bdbc6bbb-8ndgb to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3c35736de649910d69cd9d033a1d2e9388033fafe812d24c59012c5d69e9468f" Netns:"/var/run/netns/06cdfb7d-27d7-42f3-85d5-71900dc74138" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-8bdbc6bbb-8ndgb;K8S_POD_INFRA_CONTAINER_ID=3c35736de649910d69cd9d033a1d2e9388033fafe812d24c59012c5d69e9468f;K8S_POD_UID=a39a6c4b-3a37-4615-ab4f-7920f69abe8f" Path:"" ERRORED: error configuring pod [openshift-oauth-apiserver/apiserver-8bdbc6bbb-8ndgb] networking: Multus: [openshift-oauth-apiserver/apiserver-8bdbc6bbb-8ndgb/a39a6c4b-3a37-4615-ab4f-7920f69abe8f]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod apiserver-8bdbc6bbb-8ndgb in out of cluster comm: pod "apiserver-8bdbc6bbb-8ndgb" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| (x8) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x_openshift-machine-config-operator(c0fe0a5986360f679c7b11d1af7e66a3) |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_4bdf6123-2237-495d-acbd-cf089b5f9423 became leader | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-0 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-2 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-8ndgb |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-8ndgb |
Created |
Created container fix-audit-permissions | |
| (x2) | openshift-oauth-apiserver |
multus |
apiserver-8bdbc6bbb-8ndgb |
AddedInterface |
Add eth0 [10.129.0.40/23] from ovn-kubernetes |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-8ndgb |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-8ndgb |
Created |
Created container oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-8ndgb |
Started |
Started container oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-8ndgb |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
| (x5) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-dcf867d89 |
SuccessfulDelete |
Deleted pod: apiserver-dcf867d89-n6t8j | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-8bdbc6bbb |
SuccessfulCreate |
Created pod: apiserver-8bdbc6bbb-txb89 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-dcf867d89 to 1 from 2 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-8bdbc6bbb to 2 from 1 | |
openshift-oauth-apiserver |
kubelet |
apiserver-dcf867d89-n6t8j |
Killing |
Stopping container oauth-apiserver | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-8 -n openshift-kube-controller-manager because it was missing | |
| (x14) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 8 triggered by "required secret/localhost-recovery-client-token has changed" |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionTriggered |
new revision 8 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-8 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_e9a56a7d-531d-4958-815e-771d75c37acd became leader | |
| (x15) | openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 7 triggered by "required configmap/config has changed" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-7 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-7 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-7 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-7 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-controller-manager_961dadf4-e540-48f3-a338-b7e5b6e2fa1c_0(747f2a83db9a5b842039a43247942b7d64b49771f0c23f78e0f0a1fed234a74e): error adding pod openshift-kube-controller-manager_installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"747f2a83db9a5b842039a43247942b7d64b49771f0c23f78e0f0a1fed234a74e" Netns:"/var/run/netns/8c338c0e-2211-4542-a09d-81b1da236e54" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=747f2a83db9a5b842039a43247942b7d64b49771f0c23f78e0f0a1fed234a74e;K8S_POD_UID=961dadf4-e540-48f3-a338-b7e5b6e2fa1c" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-controller-manager/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2/961dadf4-e540-48f3-a338-b7e5b6e2fa1c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-etcd_233f6653-f182-444e-89d6-e459ff59b900_0(cc1585d2830b3cdb5aad7602dc0c748d9bd5545fa21fc6e6d34fd0368d8115c4): error adding pod openshift-etcd_installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"cc1585d2830b3cdb5aad7602dc0c748d9bd5545fa21fc6e6d34fd0368d8115c4" Netns:"/var/run/netns/a41ebfcf-b4e4-4e2f-8eb3-05f2c66c4328" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd;K8S_POD_NAME=installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=cc1585d2830b3cdb5aad7602dc0c748d9bd5545fa21fc6e6d34fd0368d8115c4;K8S_POD_UID=233f6653-f182-444e-89d6-e459ff59b900" Path:"" ERRORED: error configuring pod [openshift-etcd/installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-etcd/installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2/233f6653-f182-444e-89d6-e459ff59b900]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-7 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-sfvgs |
| | openshift-cluster-csi-drivers | daemonset-controller | gcp-pd-csi-driver-node | SuccessfulCreate | Created pod: gcp-pd-csi-driver-node-5sbbh |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-7 -n openshift-kube-apiserver because it was missing |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-98wpj |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-7 -n openshift-kube-apiserver because it was missing |
| | default | cloud-node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | Synced | Node synced successfully |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-ztlz7 |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-vxgq8 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-7 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-7 -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-nbxw4 |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-p46lt |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-5f8w4 |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-ts5rk |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 7 triggered by "required configmap/config has changed" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p46lt | Started | Started container machine-config-daemon |
| (x5) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | Created | Created container kube-rbac-proxy-crio |
| | openshift-dns | kubelet | node-resolver-5f8w4 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p46lt | Created | Created container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-vxgq8 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p46lt | Started | Started container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p46lt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p46lt | Created | Created container machine-config-daemon |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-p46lt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller |
| | openshift-machine-config-operator | machine-config-operator | ci-op-2fcpj5j6-f6035-2lklf-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-etcd_233f6653-f182-444e-89d6-e459ff59b900_0(af9dc049ce5616eaa1399d21db7bd38ae97c1cde071a5510d1fe7d8c852c5c6d): error adding pod openshift-etcd_installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"af9dc049ce5616eaa1399d21db7bd38ae97c1cde071a5510d1fe7d8c852c5c6d" Netns:"/var/run/netns/66947bd4-48f9-42c1-99ba-1e7a2d3c0d90" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd;K8S_POD_NAME=installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=af9dc049ce5616eaa1399d21db7bd38ae97c1cde071a5510d1fe7d8c852c5c6d;K8S_POD_UID=233f6653-f182-444e-89d6-e459ff59b900" Path:"" ERRORED: error configuring pod [openshift-etcd/installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-etcd/installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2/233f6653-f182-444e-89d6-e459ff59b900]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x5) | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x2) | openshift-etcd | multus | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-controller-manager_961dadf4-e540-48f3-a338-b7e5b6e2fa1c_0(649d402f2e64f1e090415ddcd87f0707df0864af8a715f8eafc5e0c3bae81b94): error adding pod openshift-kube-controller-manager_installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"649d402f2e64f1e090415ddcd87f0707df0864af8a715f8eafc5e0c3bae81b94" Netns:"/var/run/netns/c24b3211-93e7-43ea-a29a-8f91b09aa5ef" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=649d402f2e64f1e090415ddcd87f0707df0864af8a715f8eafc5e0c3bae81b94;K8S_POD_UID=961dadf4-e540-48f3-a338-b7e5b6e2fa1c" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-controller-manager/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2/961dadf4-e540-48f3-a338-b7e5b6e2fa1c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x5) | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]shutdown ok readyz check failed |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x3) | openshift-kube-controller-manager | multus | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-nbxw4 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-nbxw4 | Failed | Error: ErrImagePull |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-nbxw4 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-nbxw4 | Failed | Error: ImagePullBackOff |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Unhealthy | Readiness probe failed: Get "https://10.128.0.20:8443/readyz": context deadline exceeded |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | ProbeError | Readiness probe error: Get "https://10.128.0.20:8443/readyz": context deadline exceeded body: |
| (x3) | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-controller-manager |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-controller-manager |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-ztlz7 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-kxckc" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-sfvgs | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Failed | Error: ErrImagePull |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" in 35.43s (35.43s including waiting). Image size: 536898687 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Failed | Error: ImagePullBackOff |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | tuned-nbxw4 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Unhealthy | Readiness probe failed: Get "https://10.129.0.29:8443/readyz": context deadline exceeded |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Failed | Error: ImagePullBackOff |
| | openshift-dns | kubelet | node-resolver-5f8w4 | Started | Started container dns-node-resolver |
| | openshift-dns | kubelet | node-resolver-5f8w4 | Created | Created container dns-node-resolver |
| | openshift-dns | kubelet | node-resolver-5f8w4 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" in 36.887s (36.887s including waiting). Image size: 563905988 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Failed | Error: ErrImagePull |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" |
| | openshift-multus | kubelet | multus-vxgq8 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" in 36.836s (36.836s including waiting). Image size: 1209582329 bytes. |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | ProbeError | Readiness probe error: Get "https://10.129.0.29:8443/readyz": context deadline exceeded body: |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Started | Started container csi-driver |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Created | Created container csi-driver |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-nbxw4 | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-nbxw4 | Created | Created container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-nbxw4 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" in 1.62s (1.62s including waiting). Image size: 680556885 bytes. |
| (x2) | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-operator-7ddb788594-zjfz2 | Started | Started container gcp-pd-csi-driver-operator |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-operator-7ddb788594-zjfz2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:11cfe62fc07450292261dab71b3eb1ef1fc615a24e05c282044403264b567db6" already present on machine |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Started | Started container csi-node-driver-registrar |
| (x2) | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-operator-7ddb788594-zjfz2 | Created | Created container gcp-pd-csi-driver-operator |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" in 2.013s (2.013s including waiting). Image size: 396191352 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Created | Created container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Started | Started container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-5sbbh | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" in 1.629s (1.629s including waiting). Image size: 396574211 bytes. |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-7b558f58f9-nfmbb | Created | Created container authentication-operator |
| (x3) | openshift-authentication-operator | kubelet | authentication-operator-7b558f58f9-nfmbb | Started | Started container authentication-operator |
| (x2) | openshift-authentication-operator | kubelet | authentication-operator-7b558f58f9-nfmbb | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:432254b8dc65f17472fe6f9bd5a7cde177658799ebb05baede8f91ee2cd62472" already present on machine |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | Unhealthy | Readiness probe failed: Get "https://10.128.0.24:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-8ndgb | ProbeError | Readiness probe error: Get "https://10.129.0.40:8443/readyz": context deadline exceeded body: |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-8ndgb | Unhealthy | Readiness probe failed: Get "https://10.129.0.40:8443/readyz": context deadline exceeded |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 8: rpc error: code = DeadlineExceeded desc = context deadline exceeded |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 2.586s (2.586s including waiting). Image size: 1406971151 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Started | Started container kubecfg-setup |
| (x2) | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Created | Created container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Created | Created container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Started | Started container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Started | Started container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-ts5rk | Created | Created container kube-rbac-proxy-node |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" in 1.84s (1.84s including waiting). Image size: 571426836 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 7: rpc error: code = DeadlineExceeded desc = context deadline exceeded |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Started | Started container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Created | Created container egress-router-binary-copy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" |
| (x5) | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-n6t8j | ProbeError | Readiness probe error: Get "https://10.128.0.24:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" in 7.793s (7.793s including waiting). Image size: 691795442 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Created | Created container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigControllerFailed | Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: failed to apply machine config controller manifests: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io machine-config-controller) |
| (x5) | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Unhealthy | Readiness probe failed: Get "https://10.128.0.20:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x5) | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | ProbeError | Readiness probe error: Get "https://10.128.0.20:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | Failed to create installer pod for revision 7 count 1 on node "ci-op-2fcpj5j6-f6035-2lklf-master-2": the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2) |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Started | Started container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Created | Created container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" in 5.293s (5.293s including waiting). Image size: 389927221 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" in 5.546s (5.546s including waiting). Image size: 375717862 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_722725e8-bddc-4d52-ae26-b3873ca45bd9 became leader |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [+]shutdown ok readyz check failed |
| | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-network-node-identity | ci-op-2fcpj5j6-f6035-2lklf-master-2_9de4a77b-649e-485b-acb4-e4f9904ae344 | ovnkube-identity | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_9de4a77b-649e-485b-acb4-e4f9904ae344 became leader |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]shutdown ok readyz check failed |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 6" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
Created <unknown>/v1.authorization.openshift.io because it was missing | ||
openshift-apiserver |
kubelet |
apiserver-77d45ddc66-sw2kd |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)" | |
openshift-etcd |
kubelet |
installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
kubelet |
installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container installer | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-98wpj |
Started |
Started container routeoverride-cni | |
| (x10) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | secrets: localhost-recovery-client-token-8,service-account-private-key-8 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | default | ovnkube-csr-approver-controller | csr-7zqpm | CSRApproved | CSR "csr-7zqpm" has been approved |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.build.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 9 triggered by "secret \"service-account-private-key-8\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps/etcd-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.route.openshift.io because it was missing |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Created | Created container routeoverride-cni |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"configmap\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" |
| (x16) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdateFailed | Failed to update Secret/service-account-private-key -n openshift-kube-controller-manager: Operation cannot be fulfilled on secrets "service-account-private-key": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-identity-webhook because it was missing |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | ServiceAccountCreated | Created ServiceAccount/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/pod-identity-webhook because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-28829580 |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | DeploymentCreated | Created Deployment.apps/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | MutatingWebhookConfigurationCreated | Created MutatingWebhookConfiguration.admissionregistration.k8s.io/pod-identity-webhook because it was missing |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | ServiceCreated | Created Service/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller |
| | openshift-kube-apiserver | multus | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.67/23] from ovn-kubernetes |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-28829595 |
| | openshift-kube-controller-manager | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container installer |
| | openshift-multus | kubelet | multus-hb5v6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-zh6rm |
| | openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-4mbtw |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" |
| | openshift-cloud-credential-operator | replicaset-controller | pod-identity-webhook-679666b9 | SuccessfulCreate | Created pod: pod-identity-webhook-679666b9-lfgzj |
| | openshift-multus | daemonset-controller | multus | SuccessfulCreate | Created pod: multus-hb5v6 |
| | openshift-multus | daemonset-controller | network-metrics-daemon | SuccessfulCreate | Created pod: network-metrics-daemon-bmskb |
| | openshift-multus | daemonset-controller | multus-additional-cni-plugins | SuccessfulCreate | Created pod: multus-additional-cni-plugins-k75xr |
| | openshift-cloud-credential-operator | replicaset-controller | pod-identity-webhook-679666b9 | SuccessfulCreate | Created pod: pod-identity-webhook-679666b9-xh4qj |
| (x2) | openshift-cloud-credential-operator | controllermanager | pod-identity-webhook | NoPods | No matching pods found |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829595 | SuccessfulCreate | Created pod: collect-profiles-28829595-6kbmk |
| | openshift-cloud-credential-operator | deployment-controller | pod-identity-webhook | ScalingReplicaSet | Scaled up replica set pod-identity-webhook-679666b9 to 2 |
| | openshift-cluster-csi-drivers | daemonset-controller | gcp-pd-csi-driver-node | SuccessfulCreate | Created pod: gcp-pd-csi-driver-node-2dhfp |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-kzgnx |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-8lq4q |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-ovn-kubernetes | daemonset-controller | ovnkube-node | SuccessfulCreate | Created pod: ovnkube-node-pfhnt |
| | openshift-cloud-credential-operator | cloud-credential-operator | cloud-credential-operator | MutatingWebhookConfigurationUpdated | Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/pod-identity-webhook because it changed |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz in Controller |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-8lq4q | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-8lq4q | Created | Created container kube-rbac-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-8lq4q | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0_openshift-kube-apiserver_117d1894-2ff4-4a62-9883-2b56d7ab9572_0(fef545c1df6aa510c484e8ec85db4cea6d554663f93fb0d638458796c3a60839): error adding pod openshift-kube-apiserver_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fef545c1df6aa510c484e8ec85db4cea6d554663f93fb0d638458796c3a60839" Netns:"/var/run/netns/b2cef90c-33bb-4d72-a4d5-87bce2089c77" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0;K8S_POD_INFRA_CONTAINER_ID=fef545c1df6aa510c484e8ec85db4cea6d554663f93fb0d638458796c3a60839;K8S_POD_UID=117d1894-2ff4-4a62-9883-2b56d7ab9572" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0] networking: Multus: [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0/117d1894-2ff4-4a62-9883-2b56d7ab9572]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 in out of cluster comm: pod "revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-xh4qj | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : failed to sync secret cache: timed out waiting for the condition |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | installer errors: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:12:20.289363 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:12:30.289105 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:12:40.289375 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:12:50.290106 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:13:00.289783 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:13:00.291016 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F1024 13:13:00.291101 1 cmd.go:105] timed out waiting for the condition |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CreatedSCCRanges | created SCC ranges for openshift-ingress-canary namespace |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | openshift-machine-config-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-apiserver | replicaset-controller | apiserver-77d45ddc66 | SuccessfulDelete | Deleted pod: apiserver-77d45ddc66-sw2kd |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-77d45ddc66-sw2kd | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver | replicaset-controller | apiserver-6d7dbc56c5 | SuccessfulCreate | Created pod: apiserver-6d7dbc56c5-l698n |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6d7dbc56c5 to 2 from 1 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-77d45ddc66 to 0 from 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-9 -n openshift-kube-controller-manager because it was missing |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-tqb4j | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-d5jsz | |
openshift-dns |
daemonset-controller |
node-resolver |
SuccessfulCreate |
Created pod: node-resolver-h4skh | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-2r78s | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-qfgpz | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0_openshift-kube-apiserver_117d1894-2ff4-4a62-9883-2b56d7ab9572_0(4ad12fba5b3160201ebaf06297411e8d350c8f5df46b63f7261fdc12760e9c86): error adding pod openshift-kube-apiserver_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4ad12fba5b3160201ebaf06297411e8d350c8f5df46b63f7261fdc12760e9c86" Netns:"/var/run/netns/d484d433-b78e-4957-9a91-3f1290927ab2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0;K8S_POD_INFRA_CONTAINER_ID=4ad12fba5b3160201ebaf06297411e8d350c8f5df46b63f7261fdc12760e9c86;K8S_POD_UID=117d1894-2ff4-4a62-9883-2b56d7ab9572" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0] networking: Multus: [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0/117d1894-2ff4-4a62-9883-2b56d7ab9572]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 in out of cluster comm: pod "revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-cloud-credential-operator |
kubelet |
pod-identity-webhook-679666b9-xh4qj |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-identity-webhook-679666b9-xh4qj_openshift-cloud-credential-operator_a8378eee-6540-4794-a1e5-7ad82217a21c_0(77df8bd5ae81916196374ee92dd1e7ced54f0aa5ab50db4d7eb5acf82ddfc375): error adding pod openshift-cloud-credential-operator_pod-identity-webhook-679666b9-xh4qj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"77df8bd5ae81916196374ee92dd1e7ced54f0aa5ab50db4d7eb5acf82ddfc375" Netns:"/var/run/netns/526e1a49-591e-43ab-8a20-cb1d5c2de1a6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=pod-identity-webhook-679666b9-xh4qj;K8S_POD_INFRA_CONTAINER_ID=77df8bd5ae81916196374ee92dd1e7ced54f0aa5ab50db4d7eb5acf82ddfc375;K8S_POD_UID=a8378eee-6540-4794-a1e5-7ad82217a21c" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-xh4qj] networking: Multus: [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-xh4qj/a8378eee-6540-4794-a1e5-7ad82217a21c]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod pod-identity-webhook-679666b9-xh4qj in out of cluster comm: pod "pod-identity-webhook-679666b9-xh4qj" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-69dkf |
| | openshift-network-diagnostics | daemonset-controller | network-check-target | SuccessfulCreate | Created pod: network-check-target-vqt97 |
| | openshift-cluster-csi-drivers | daemonset-controller | gcp-pd-csi-driver-node | SuccessfulCreate | Created pod: gcp-pd-csi-driver-node-j94ng |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-9nnhr |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-9 -n openshift-kube-controller-manager because it was missing |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-xh4qj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:73be5ac2f0ac40f28ee7f3e9e2c72c9be7bd72d86150b4f94115af21f40122aa" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 9 triggered by "secret \"service-account-private-key-8\" not found" |
| (x41) | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | StartingNewRevision | new revision 9 triggered by "required secret/service-account-private-key has changed,required secret/localhost-recovery-client-token has changed" |
| | default | cloud-node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | Synced | Node synced successfully |
| (x2) | openshift-cloud-credential-operator | multus | pod-identity-webhook-679666b9-xh4qj | AddedInterface | Add eth0 [10.129.0.42/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Started | Started container whereabouts-cni-bincopy |
| (x2) | default | cloud-node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | Synced | Node synced successfully |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Created | Created container whereabouts-cni-bincopy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 9: configmap "revision-status-9" not found |
| | openshift-multus | kubelet | multus-additional-cni-plugins-98wpj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" in 11.622s (11.622s including waiting). Image size: 580821249 bytes. |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-xh4qj | Started | Started container pod-identity-webhook |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 in Controller |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-xh4qj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:73be5ac2f0ac40f28ee7f3e9e2c72c9be7bd72d86150b4f94115af21f40122aa" in 3.618s (3.618s including waiting). Image size: 423225411 bytes. |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-xh4qj | Created | Created container pod-identity-webhook |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| (x13) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: cluster-policy-controller-config-9,config-9,controller-manager-kubeconfig-9,kube-controller-cert-syncer-kubeconfig-9,kube-controller-manager-pod-9,recycler-config-9,service-ca-9,serviceaccount-ca-9 |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | multus | installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.43/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-59826e19ffd81ce395b52f6b2b19b336 |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | NodeDone | Setting node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x, currentConfig rendered-worker-59826e19ffd81ce395b52f6b2b19b336 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | Uncordon | Update completed for config rendered-worker-59826e19ffd81ce395b52f6b2b19b336 and node has been uncordoned |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| (x3) | openshift-kube-apiserver | multus | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.41/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| (x9) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x4) | openshift-ingress | service-controller | router-default | UpdatedLoadBalancer | Updated load balancer with new hosts |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-kzgnx | Failed | Error: ErrImagePull |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-kzgnx | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-dns | kubelet | node-resolver-4mbtw | Failed | Error: ErrImagePull |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-lfgzj | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-identity-webhook-679666b9-lfgzj_openshift-cloud-credential-operator_5afa0922-af54-418e-9ab8-db641c23d4d8_0(1f867f0dc44879be751ed55ff6eb0abd72357bb19567c527cf9e9d2041cc4be1): error adding pod openshift-cloud-credential-operator_pod-identity-webhook-679666b9-lfgzj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1f867f0dc44879be751ed55ff6eb0abd72357bb19567c527cf9e9d2041cc4be1" Netns:"/var/run/netns/a6f2de42-61a1-41f4-827c-3f989b42ea85" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=pod-identity-webhook-679666b9-lfgzj;K8S_POD_INFRA_CONTAINER_ID=1f867f0dc44879be751ed55ff6eb0abd72357bb19567c527cf9e9d2041cc4be1;K8S_POD_UID=5afa0922-af54-418e-9ab8-db641c23d4d8" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj] networking: Multus: [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj/5afa0922-af54-418e-9ab8-db641c23d4d8]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod pod-identity-webhook-679666b9-lfgzj in out of cluster comm: pod "pod-identity-webhook-679666b9-lfgzj" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-kzgnx | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
| | openshift-dns | kubelet | node-resolver-4mbtw | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-apiserver_3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9_0(dc068a6813abfd5b24e991c30fed56f8c49a0a2242bb3b9267d2abdc356bba3f): error adding pod openshift-kube-apiserver_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dc068a6813abfd5b24e991c30fed56f8c49a0a2242bb3b9267d2abdc356bba3f" Netns:"/var/run/netns/484dbed8-53bf-456b-aa03-28068dd9c44b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=dc068a6813abfd5b24e991c30fed56f8c49a0a2242bb3b9267d2abdc356bba3f;K8S_POD_UID=3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2/3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-controller-manager_2fe034c5-4f06-435e-9971-f79cfa511446_0(dba096df9bfbba65da6bef569a9c1c175ac340e0c64e1c6b7dfed2e644993d4e): error adding pod openshift-kube-controller-manager_installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dba096df9bfbba65da6bef569a9c1c175ac340e0c64e1c6b7dfed2e644993d4e" Netns:"/var/run/netns/72cac890-ec60-4856-9564-ad4c6924238b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=dba096df9bfbba65da6bef569a9c1c175ac340e0c64e1c6b7dfed2e644993d4e;K8S_POD_UID=2fe034c5-4f06-435e-9971-f79cfa511446" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-controller-manager/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2/2fe034c5-4f06-435e-9971-f79cfa511446]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-kzgnx | Failed | Error: ImagePullBackOff |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8bdbc6bbb-txb89_openshift-oauth-apiserver_ed5bd724-916b-4099-95df-95331bbf04f1_0(6565d83dd703f346c3cda17b61bf9ce7565f46c28a846afcf0579e477711c131): error adding pod openshift-oauth-apiserver_apiserver-8bdbc6bbb-txb89 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6565d83dd703f346c3cda17b61bf9ce7565f46c28a846afcf0579e477711c131" Netns:"/var/run/netns/09d521ea-abf5-4e8d-861c-9454f88b79c7" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-8bdbc6bbb-txb89;K8S_POD_INFRA_CONTAINER_ID=6565d83dd703f346c3cda17b61bf9ce7565f46c28a846afcf0579e477711c131;K8S_POD_UID=ed5bd724-916b-4099-95df-95331bbf04f1" Path:"" ERRORED: error configuring pod [openshift-oauth-apiserver/apiserver-8bdbc6bbb-txb89] networking: Multus: [openshift-oauth-apiserver/apiserver-8bdbc6bbb-txb89/ed5bd724-916b-4099-95df-95331bbf04f1]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod apiserver-8bdbc6bbb-txb89 in out of cluster comm: pod "apiserver-8bdbc6bbb-txb89" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-dns | kubelet | node-resolver-4mbtw | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" |
| | openshift-dns | kubelet | node-resolver-4mbtw | Failed | Error: ImagePullBackOff |
| (x10) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container etcd |
| | openshift-etcd | static-pod-installer | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 | StaticPodInstallerCompleted | Successfully installed revision 12 |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-bmskb | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x7) | openshift-network-diagnostics | kubelet | network-check-target-zh6rm | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-294p6" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Failed | Error: ErrImagePull |
| (x2) | openshift-cluster-node-tuning-operator | kubelet | tuned-kzgnx | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" in 37.408s (37.408s including waiting). Image size: 536898687 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Failed | Error: ImagePullBackOff |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Created | Created container csi-driver |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Started | Started container csi-driver |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Failed | Error: ErrImagePull |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Failed | Error: ImagePullBackOff |
| | openshift-multus | kubelet | multus-hb5v6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" in 38.071s (38.071s including waiting). Image size: 1209582329 bytes. |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-kzgnx | Started | Started container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-kzgnx | Created | Created container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-kzgnx | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" in 2.71s (2.71s including waiting). Image size: 680556885 bytes. |
| (x2) | openshift-dns | kubelet | node-resolver-4mbtw | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Started | Started container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Created | Created container csi-node-driver-registrar |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" in 3.768s (3.768s including waiting). Image size: 396191352 bytes. |
| | openshift-dns | kubelet | node-resolver-4mbtw | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" in 1.833s (1.833s including waiting). Image size: 563905988 bytes. |
| | openshift-dns | kubelet | node-resolver-4mbtw | Created | Created container dns-node-resolver |
| | openshift-dns | kubelet | node-resolver-4mbtw | Started | Started container dns-node-resolver |
| (x13) | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [-]etcd-readiness failed: reason withheld [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]shutdown ok readyz check failed |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" in 4.245s (4.245s including waiting). Image size: 396574211 bytes. |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Created | Created container csi-liveness-probe |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-2dhfp | Started | Started container csi-liveness-probe |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
| (x7) | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | ProbeError | Readiness probe error: Get "https://10.129.0.29:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| (x7) | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Unhealthy | Readiness probe failed: Get "https://10.129.0.29:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x4) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | BackOff | Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1_openshift-kube-controller-manager(b61f4e787fe92aaaf66a5d43d6a9f32a) |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Started | Started container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Created | Created container kubecfg-setup |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 2.878s (2.878s including waiting). Image size: 1406971151 bytes. |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Created | Created container ovn-acl-logging |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Started | Started container kube-rbac-proxy-ovn-metrics |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Created | Created container kube-rbac-proxy-ovn-metrics |
| (x2) | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Created | Created container northd |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Started | Started container northd |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Created | Created container ovn-controller |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Started | Started container ovn-controller |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Started | Started container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Created | Created container kube-rbac-proxy-node |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-pfhnt | Started | Started container ovn-acl-logging |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-controller-manager-cert-syncer |
openshift-multus |
kubelet |
multus-additional-cni-plugins-k75xr |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" in 1.215s (1.215s including waiting). Image size: 571426836 bytes. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-k75xr |
Created |
Created container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-k75xr |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-k75xr |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" | |
| (x4) | openshift-ingress-operator |
kubelet |
ingress-operator-6b9fd98fb4-hksdp |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" already present on machine |
| (x7) | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-8ndgb | Unhealthy | Readiness probe failed: Get "https://10.129.0.40:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | ProbeError | Readiness probe error: Get "https://10.130.0.54:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" in 7.648s (7.648s including waiting). Image size: 691795442 bytes. |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-dcf867d89-zrhwj | Unhealthy | Readiness probe failed: Get "https://10.130.0.54:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Created | Created container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Started | Started container cni-plugins |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" in 5.161s (5.161s including waiting). Image size: 389927221 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Started | Started container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Created | Created container bond-cni-plugin |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" |
| (x8) | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-8ndgb | ProbeError | Readiness probe error: Get "https://10.129.0.40:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-p46lt | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-p46lt | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-gjpmc | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-gjpmc | Created | Created container approver |
| (x2) | openshift-network-node-identity | kubelet | network-node-identity-gjpmc | Started | Started container approver |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" in 6.979s (6.979s including waiting). Image size: 375717862 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Created | Created container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Started | Started container routeoverride-cni |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" |
| (x7) | openshift-multus | kubelet | network-metrics-daemon-d5jsz | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
| (x18) | openshift-multus | kubelet | network-metrics-daemon-d5jsz | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-network-diagnostics | kubelet | network-check-target-vqt97 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-dbkn2" : [failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded, object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:785b30a0f466a3a4581f68f36dd9de13a5700b48fe6b0e32f01541d4ae611010" in 9.597s (9.597s including waiting). Image size: 580821249 bytes. |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Created | Created container whereabouts-cni-bincopy |
| | openshift-multus | kubelet | multus-additional-cni-plugins-k75xr | Started | Started container whereabouts-cni-bincopy |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8bdbc6bbb-txb89_openshift-oauth-apiserver_ed5bd724-916b-4099-95df-95331bbf04f1_0(baad3d10937e5e85704661fe1bfa5497cf85ba8d29d4e5d3d0d20d9ed59497b3): error adding pod openshift-oauth-apiserver_apiserver-8bdbc6bbb-txb89 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"baad3d10937e5e85704661fe1bfa5497cf85ba8d29d4e5d3d0d20d9ed59497b3" Netns:"/var/run/netns/e964db3d-4da2-40ae-8a51-8eb52f01390f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-8bdbc6bbb-txb89;K8S_POD_INFRA_CONTAINER_ID=baad3d10937e5e85704661fe1bfa5497cf85ba8d29d4e5d3d0d20d9ed59497b3;K8S_POD_UID=ed5bd724-916b-4099-95df-95331bbf04f1" Path:"" ERRORED: error configuring pod [openshift-oauth-apiserver/apiserver-8bdbc6bbb-txb89] networking: Multus: [openshift-oauth-apiserver/apiserver-8bdbc6bbb-txb89/ed5bd724-916b-4099-95df-95331bbf04f1]: error waiting for pod: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-oauth-apiserver/pods/apiserver-8bdbc6bbb-txb89?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-apiserver_3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9_0(8979ede010dc37a6efba7a12a4b588ef60fa36afdb3f67c722011d37f9916cb4): error adding pod openshift-kube-apiserver_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8979ede010dc37a6efba7a12a4b588ef60fa36afdb3f67c722011d37f9916cb4" Netns:"/var/run/netns/a44a55b2-80e4-4c7d-96c7-33264a4809bc" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=8979ede010dc37a6efba7a12a4b588ef60fa36afdb3f67c722011d37f9916cb4;K8S_POD_UID=3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2/3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9]: error waiting for pod: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-lfgzj | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-identity-webhook-679666b9-lfgzj_openshift-cloud-credential-operator_5afa0922-af54-418e-9ab8-db641c23d4d8_0(ca15d8b2d88eca5cf65f36c8eadc70db9d6b8887064df8ac8799320047601bd5): error adding pod openshift-cloud-credential-operator_pod-identity-webhook-679666b9-lfgzj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ca15d8b2d88eca5cf65f36c8eadc70db9d6b8887064df8ac8799320047601bd5" Netns:"/var/run/netns/c7c723c1-3c86-4898-a826-e1074c4f65c4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=pod-identity-webhook-679666b9-lfgzj;K8S_POD_INFRA_CONTAINER_ID=ca15d8b2d88eca5cf65f36c8eadc70db9d6b8887064df8ac8799320047601bd5;K8S_POD_UID=5afa0922-af54-418e-9ab8-db641c23d4d8" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj] networking: Multus: [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj/5afa0922-af54-418e-9ab8-db641c23d4d8]: error waiting for pod: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/pod-identity-webhook-679666b9-lfgzj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-controller-manager_2fe034c5-4f06-435e-9971-f79cfa511446_0(3595dd4bb62f59bffcbd127988f48bddabf935d4c71d26efa14b054247ce0cc5): error adding pod openshift-kube-controller-manager_installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3595dd4bb62f59bffcbd127988f48bddabf935d4c71d26efa14b054247ce0cc5" Netns:"/var/run/netns/26da9772-3aac-4c97-a4ff-d17c70073697" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=3595dd4bb62f59bffcbd127988f48bddabf935d4c71d26efa14b054247ce0cc5;K8S_POD_UID=2fe034c5-4f06-435e-9971-f79cfa511446" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-controller-manager/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2/2fe034c5-4f06-435e-9971-f79cfa511446]: error waiting for pod: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | community-operators-2wjrj | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-2wjrj_openshift-marketplace_78f0eb9c-2de8-4051-9d15-31d970b476ca_0(a9d2b078b550291807221c319ecc8a2ad7209182593110bb0417cedfbb50a54e): error adding pod openshift-marketplace_community-operators-2wjrj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a9d2b078b550291807221c319ecc8a2ad7209182593110bb0417cedfbb50a54e" Netns:"/var/run/netns/f0e88afc-0116-4853-860f-ecdd84f950e8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-2wjrj;K8S_POD_INFRA_CONTAINER_ID=a9d2b078b550291807221c319ecc8a2ad7209182593110bb0417cedfbb50a54e;K8S_POD_UID=78f0eb9c-2de8-4051-9d15-31d970b476ca" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-2wjrj] networking: Multus: [openshift-marketplace/community-operators-2wjrj/78f0eb9c-2de8-4051-9d15-31d970b476ca]: error waiting for pod: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2wjrj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | certified-operators-nln6m | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-nln6m_openshift-marketplace_5439da16-223d-4d86-82da-06e1c02a4b55_0(7b85b798e6a80f2e54dd4764dc1a597b996ba9c0e937f4a3c8ea1b499d8271dc): error adding pod openshift-marketplace_certified-operators-nln6m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7b85b798e6a80f2e54dd4764dc1a597b996ba9c0e937f4a3c8ea1b499d8271dc" Netns:"/var/run/netns/ff62bd64-bea1-49c9-b159-e5b26b240d9b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-nln6m;K8S_POD_INFRA_CONTAINER_ID=7b85b798e6a80f2e54dd4764dc1a597b996ba9c0e937f4a3c8ea1b499d8271dc;K8S_POD_UID=5439da16-223d-4d86-82da-06e1c02a4b55" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-nln6m] networking: Multus: [openshift-marketplace/certified-operators-nln6m/5439da16-223d-4d86-82da-06e1c02a4b55]: error waiting for pod: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nln6m?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x24) | openshift-network-diagnostics | kubelet | network-check-target-vqt97 | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-ensure-env-vars |
| | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2a18913ba70068e4f34dfd93992d0c71efbbf175fe66db130ad5880f9bb3b144" already present on machine |
| (x2) | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Created | Created container machine-controller |
| (x2) | openshift-machine-api | kubelet | machine-api-controllers-7785d897-m4jlj | Started | Started container machine-controller |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8bdbc6bbb-txb89_openshift-oauth-apiserver_ed5bd724-916b-4099-95df-95331bbf04f1_0(10714aab117069fb971c849b1ced6b0d3dd3f0d324a18be463f8a6b91c66d482): error adding pod openshift-oauth-apiserver_apiserver-8bdbc6bbb-txb89 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"10714aab117069fb971c849b1ced6b0d3dd3f0d324a18be463f8a6b91c66d482" Netns:"/var/run/netns/4d27f488-57da-4192-8ace-c641c4d3eaba" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-8bdbc6bbb-txb89;K8S_POD_INFRA_CONTAINER_ID=10714aab117069fb971c849b1ced6b0d3dd3f0d324a18be463f8a6b91c66d482;K8S_POD_UID=ed5bd724-916b-4099-95df-95331bbf04f1" Path:"" ERRORED: error configuring pod [openshift-oauth-apiserver/apiserver-8bdbc6bbb-txb89] networking: Multus: [openshift-oauth-apiserver/apiserver-8bdbc6bbb-txb89/ed5bd724-916b-4099-95df-95331bbf04f1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod apiserver-8bdbc6bbb-txb89 in out of cluster comm: SetNetworkStatus: failed to update the pod apiserver-8bdbc6bbb-txb89 in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-oauth-apiserver/pods/apiserver-8bdbc6bbb-txb89?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-apiserver_3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9_0(20b23e5282d0629fec8fb2428d9529c5ffdf9f939b7d0945870fcdf3f05f3913): error adding pod openshift-kube-apiserver_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"20b23e5282d0629fec8fb2428d9529c5ffdf9f939b7d0945870fcdf3f05f3913" Netns:"/var/run/netns/045aec6a-d6fc-406c-89c5-1b6dd279c5be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=20b23e5282d0629fec8fb2428d9529c5ffdf9f939b7d0945870fcdf3f05f3913;K8S_POD_UID=3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2/3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: SetNetworkStatus: failed to update the pod revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-controller-manager_2fe034c5-4f06-435e-9971-f79cfa511446_0(b083319ffe835b5b77c898af612c03699d7fedc5c4fd0813050a02470eb78e3a): error adding pod openshift-kube-controller-manager_installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b083319ffe835b5b77c898af612c03699d7fedc5c4fd0813050a02470eb78e3a" Netns:"/var/run/netns/1a690615-a310-4034-bb19-16309812a018" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=b083319ffe835b5b77c898af612c03699d7fedc5c4fd0813050a02470eb78e3a;K8S_POD_UID=2fe034c5-4f06-435e-9971-f79cfa511446" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-controller-manager/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2/2fe034c5-4f06-435e-9971-f79cfa511446]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | certified-operators-nln6m | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-nln6m_openshift-marketplace_5439da16-223d-4d86-82da-06e1c02a4b55_0(f53b08bf7392755a9e2e62de3480c7122ea6788ccd777acdcab65e9def27e182): error adding pod openshift-marketplace_certified-operators-nln6m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f53b08bf7392755a9e2e62de3480c7122ea6788ccd777acdcab65e9def27e182" Netns:"/var/run/netns/afbe3e08-6e8e-461e-a287-d823dc73904f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-nln6m;K8S_POD_INFRA_CONTAINER_ID=f53b08bf7392755a9e2e62de3480c7122ea6788ccd777acdcab65e9def27e182;K8S_POD_UID=5439da16-223d-4d86-82da-06e1c02a4b55" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-nln6m] networking: Multus: [openshift-marketplace/certified-operators-nln6m/5439da16-223d-4d86-82da-06e1c02a4b55]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod certified-operators-nln6m in out of cluster comm: SetNetworkStatus: failed to update the pod certified-operators-nln6m in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nln6m?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-lfgzj | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-identity-webhook-679666b9-lfgzj_openshift-cloud-credential-operator_5afa0922-af54-418e-9ab8-db641c23d4d8_0(62822439b5492c6ba450c066418deba68f7a6660fb31d412dd102b8f341d4e1c): error adding pod openshift-cloud-credential-operator_pod-identity-webhook-679666b9-lfgzj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"62822439b5492c6ba450c066418deba68f7a6660fb31d412dd102b8f341d4e1c" Netns:"/var/run/netns/2bd61d02-839b-42ad-81b4-0b8c36b24e9e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=pod-identity-webhook-679666b9-lfgzj;K8S_POD_INFRA_CONTAINER_ID=62822439b5492c6ba450c066418deba68f7a6660fb31d412dd102b8f341d4e1c;K8S_POD_UID=5afa0922-af54-418e-9ab8-db641c23d4d8" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj] networking: Multus: [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj/5afa0922-af54-418e-9ab8-db641c23d4d8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod pod-identity-webhook-679666b9-lfgzj in out of cluster comm: SetNetworkStatus: failed to update the pod pod-identity-webhook-679666b9-lfgzj in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/pod-identity-webhook-679666b9-lfgzj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | community-operators-2wjrj | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-2wjrj_openshift-marketplace_78f0eb9c-2de8-4051-9d15-31d970b476ca_0(c9f8abfcb327e6b4fbab4b28661df84c4ad69d6e90f6e7dddda95aeadb62a75b): error adding pod openshift-marketplace_community-operators-2wjrj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c9f8abfcb327e6b4fbab4b28661df84c4ad69d6e90f6e7dddda95aeadb62a75b" Netns:"/var/run/netns/0b99fa28-69ba-4093-999d-455a949df9f9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-2wjrj;K8S_POD_INFRA_CONTAINER_ID=c9f8abfcb327e6b4fbab4b28661df84c4ad69d6e90f6e7dddda95aeadb62a75b;K8S_POD_UID=78f0eb9c-2de8-4051-9d15-31d970b476ca" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-2wjrj] networking: Multus: [openshift-marketplace/community-operators-2wjrj/78f0eb9c-2de8-4051-9d15-31d970b476ca]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-2wjrj in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-2wjrj in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2wjrj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x3) | openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-8zwgp |
Created |
Created container csi-provisioner |
| (x2) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn |
Created |
Created container config-sync-controllers |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-8zwgp |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:96b42bbafd9e46a37021c4ed3a565dce80f369546a57a6573bcf89a827d0366f" already present on machine |
| (x3) | openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-8zwgp |
Started |
Started container csi-provisioner |
| (x5) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:43c131e0ab4daf9b297d84bda92ba78bd5df8af483ad8e96e10d05d37cd4a08a" already present on machine |
| (x2) | openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-f546c9d4b-z6bjn |
Started |
Started container config-sync-controllers |
| (x2) | openshift-machine-api |
kubelet |
machine-api-controllers-7785d897-m4jlj |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839" already present on machine |
| (x3) | openshift-service-ca-operator |
kubelet |
service-ca-operator-7bf6f695bf-4rjcs |
Created |
Created container service-ca-operator |
| (x3) | openshift-service-ca-operator |
kubelet |
service-ca-operator-7bf6f695bf-4rjcs |
Started |
Started container service-ca-operator |
| (x2) | openshift-service-ca-operator |
kubelet |
service-ca-operator-7bf6f695bf-4rjcs |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5c165af177cdd364042386f990d4f500b8aec6ad932d2d09904e2d4c66a843bb" already present on machine |
| (x2) | openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-b5rxc |
Created |
Created container csi-resizer |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-controller-745666687f-b5rxc |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:51d084be99ab25e6f1ce93612798a543842d3ac1c0644abd8a69e495e91be5fa" already present on machine | |
| (x2) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-7c8c54f569-rsqg2 |
Created |
Created container cluster-image-registry-operator |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" already present on machine | |
| (x2) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6846798df4-kwxvp |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:252cff0c140b9f16ee1902dedf2316ac40cb5b8bdb04bc8b75c84bf44daeda02" already present on machine |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-7c8c54f569-rsqg2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f4977839a900eff18097ecd23a5143963ab4d9fd255383f2734eea3c1de97343" already present on machine | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-5b66777f7c-9pqmc |
BackOff |
Back-off restarting failed container cluster-node-tuning-operator in pod cluster-node-tuning-operator-5b66777f7c-9pqmc_openshift-cluster-node-tuning-operator(5bacb661-a5ed-4083-8539-df289aaabb11) | |
| (x2) | openshift-config-operator |
kubelet |
openshift-config-operator-85b957bbfc-dwcrh |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c0634bf2f0bb787b769eac28c0323ae2558b07adf3b851b5a46ed0c968909a2d" already present on machine |
| (x2) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-86c7d8d555-x49bl |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9b6f4ef915a76216ca40d3a5438c63d70e4053019a1b91e4af06ad224ec3a9fe" already present on machine |
| (x2) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container cluster-policy-controller |
openshift-machine-api |
kubelet |
cluster-baremetal-operator-7648bf4f7c-nml8w |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3596fda1c9ca4be9b64d082aa96220e4da44af1b4c6c7f6ef57ea2b8a88ce6ef" already present on machine | |
| (x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-operator-55d6dfd54f-k2phh |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-f7554d4b7-xd4h9 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-9bd7f8667-lfs5z |
BackOff |
Back-off restarting failed container csi-snapshot-controller-operator in pod csi-snapshot-controller-operator-9bd7f8667-lfs5z_openshift-cluster-storage-operator(86f2ee3e-b83a-459d-80fe-59a5dbaa9356) | |
| (x2) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-7c885b8899-z89zf |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-controller-54475c996-znc5k |
Started |
Started container machine-config-controller |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-f7554d4b7-xd4h9 |
Created |
Created container package-server-manager |
openshift-machine-api |
kubelet |
control-plane-machine-set-operator-7667c744f7-8tlf7 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5736327df975ce44c3d66bbea2de7ed5f172f208a31de242272c919099785f4f" already present on machine | |
| (x2) | openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-f7554d4b7-xd4h9 |
Started |
Started container package-server-manager |
| (x2) | openshift-image-registry |
kubelet |
cluster-image-registry-operator-7c8c54f569-rsqg2 |
Started |
Started container cluster-image-registry-operator |
| (x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| (x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-scheduler-recovery-controller |
| (x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-scheduler-recovery-controller |
openshift-cloud-network-config-controller |
kubelet |
cloud-network-config-controller-7699df78d5-mx8n9 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:6f9adb6ccf0dfed45237d3a5459f03a073c02460df59949738526c9b841d4487" already present on machine | |
| (x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-scheduler |
| (x3) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6846798df4-kwxvp |
Started |
Started container openshift-apiserver-operator |
| (x2) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7648bf4f7c-nml8w |
Started |
Started container cluster-baremetal-operator |
| (x2) | openshift-machine-api |
kubelet |
cluster-baremetal-operator-7648bf4f7c-nml8w |
Created |
Created container cluster-baremetal-operator |
| (x3) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-6846798df4-kwxvp |
Created |
Created container openshift-apiserver-operator |
| (x2) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-scheduler |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-controller-54475c996-znc5k |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-operator-55d6dfd54f-k2phh |
Started |
Started container machine-config-operator |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-operator-55d6dfd54f-k2phh |
Created |
Created container machine-config-operator |
| (x2) | openshift-machine-config-operator |
kubelet |
machine-config-controller-54475c996-znc5k |
Created |
Created container machine-config-controller |
| (x3) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-86c7d8d555-x49bl |
Created |
Created container kube-storage-version-migrator-operator |
| (x3) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-86c7d8d555-x49bl |
Started |
Started container kube-storage-version-migrator-operator |
openshift-machine-api |
kubelet |
machine-api-operator-c6cf9575f-k7jtl |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c35448d60ca764dcef6dd31749ae6752939edd7851ab9e6db1a9a27c2bc33839" already present on machine | |
| (x2) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-749f4b99b7-fqnd2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-5f544c54d7-4lmsc |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" already present on machine |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-85b957bbfc-dwcrh |
Created |
Created container openshift-config-operator |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-776f9d4bf4-dthxh |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:55f75414549c1b6b93e2581c632d2c18c5ac28d543dff91360add584e744db45" already present on machine | |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-5f544c54d7-4lmsc |
Created |
Created container controller-manager |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-85b957bbfc-dwcrh |
Started |
Started container openshift-config-operator |
| (x2) | openshift-controller-manager |
kubelet |
controller-manager-5f544c54d7-4lmsc |
Started |
Started container controller-manager |
| (x2) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-776f9d4bf4-dthxh |
Created |
Created container cluster-autoscaler-operator |
| (x2) | openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-86f6b4f867-vvnvr |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ce407ee69f5f30ad5abc97ecf508b395e999f09526adcf4fe5c16b43c52b4141" already present on machine |
| (x3) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-7c885b8899-z89zf |
Created |
Created container kube-controller-manager-operator |
| (x3) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-7c885b8899-z89zf |
Started |
Started container kube-controller-manager-operator |
| (x2) | openshift-machine-api |
kubelet |
cluster-autoscaler-operator-776f9d4bf4-dthxh |
Started |
Started container cluster-autoscaler-operator |
| (x2) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7b64b578df-w9z5s |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| (x2) | openshift-machine-api |
kubelet |
machine-api-operator-c6cf9575f-k7jtl |
Started |
Started container machine-api-operator |
| (x2) | openshift-machine-api |
kubelet |
machine-api-operator-c6cf9575f-k7jtl |
Created |
Created container machine-api-operator |
openshift-cluster-machine-approver |
kubelet |
machine-approver-5697c6f6dd-kpg6d |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:009f83432ae94b0a725b4041638740872a214e0b95db4361d8bfa8a73c13aae0" already present on machine | |
| (x2) | openshift-etcd-operator |
kubelet |
etcd-operator-7bbcf99d5c-9746p |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| (x2) | openshift-cloud-network-config-controller |
kubelet |
cloud-network-config-controller-7699df78d5-mx8n9 |
Created |
Created container controller |
| (x3) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-749f4b99b7-fqnd2 |
Created |
Created container kube-apiserver-operator |
| (x2) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-7667c744f7-8tlf7 |
Created |
Created container control-plane-machine-set-operator |
| (x2) | openshift-cloud-network-config-controller |
kubelet |
cloud-network-config-controller-7699df78d5-mx8n9 |
Started |
Started container controller |
| (x2) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-7667c744f7-8tlf7 |
Started |
Started container control-plane-machine-set-operator |
| (x3) | openshift-kube-apiserver-operator |
kubelet |
kube-apiserver-operator-749f4b99b7-fqnd2 |
Started |
Started container kube-apiserver-operator |
| (x3) | openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-86f6b4f867-vvnvr |
Started |
Started container cluster-storage-operator |
| (x3) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7b64b578df-w9z5s |
Created |
Created container kube-scheduler-operator-container |
| (x3) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-7b64b578df-w9z5s |
Started |
Started container kube-scheduler-operator-container |
| (x2) | openshift-cluster-machine-approver |
kubelet |
machine-approver-5697c6f6dd-kpg6d |
Created |
Created container machine-approver-controller |
| (x2) | openshift-cluster-machine-approver |
kubelet |
machine-approver-5697c6f6dd-kpg6d |
Started |
Started container machine-approver-controller |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-7bbcf99d5c-9746p |
Created |
Created container etcd-operator |
| (x2) | openshift-service-ca |
kubelet |
service-ca-7949b5fbb4-gsbvx |
Created |
Created container service-ca-controller |
| (x3) | openshift-etcd-operator |
kubelet |
etcd-operator-7bbcf99d5c-9746p |
Started |
Started container etcd-operator |
openshift-service-ca |
kubelet |
service-ca-7949b5fbb4-gsbvx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5c165af177cdd364042386f990d4f500b8aec6ad932d2d09904e2d4c66a843bb" already present on machine | |
| (x2) | openshift-service-ca |
kubelet |
service-ca-7949b5fbb4-gsbvx |
Started |
Started container service-ca-controller |
| (x3) | openshift-cluster-storage-operator |
kubelet |
cluster-storage-operator-86f6b4f867-vvnvr |
Created |
Created container cluster-storage-operator |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-85b957bbfc-dwcrh |
ProbeError |
Liveness probe error: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused body: |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
ProbeError |
Readiness probe error: Get "https://10.0.0.3:10259/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: | |
| (x3) | openshift-config-operator |
kubelet |
openshift-config-operator-85b957bbfc-dwcrh |
Unhealthy |
Liveness probe failed: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
| (x4) | openshift-config-operator |
kubelet |
openshift-config-operator-85b957bbfc-dwcrh |
Unhealthy |
Readiness probe failed: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container etcd-resources-copy | |
| (x4) | openshift-config-operator |
kubelet |
openshift-config-operator-85b957bbfc-dwcrh |
ProbeError |
Readiness probe error: Get "https://10.130.0.37:8443/healthz": dial tcp 10.130.0.37:8443: connect: connection refused body: |
openshift-etcd-operator |
kubelet |
etcd-operator-7bbcf99d5c-9746p |
Unhealthy |
Liveness probe failed: Get "https://10.130.0.29:8443/healthz": dial tcp 10.130.0.29:8443: connect: connection refused | |
openshift-etcd-operator |
kubelet |
etcd-operator-7bbcf99d5c-9746p |
ProbeError |
Liveness probe error: Get "https://10.130.0.29:8443/healthz": dial tcp 10.130.0.29:8443: connect: connection refused body: | |
| (x3) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-786b85b959-zrm7s |
BackOff |
Back-off restarting failed container openshift-controller-manager-operator in pod openshift-controller-manager-operator-786b85b959-zrm7s_openshift-controller-manager-operator(13d1f434-f745-4dfc-97ba-423e4bb23b7b) |
| (x3) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-9bd7f8667-lfs5z |
Started |
Started container csi-snapshot-controller-operator |
| (x3) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-9bd7f8667-lfs5z |
Created |
Created container csi-snapshot-controller-operator |
| (x2) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-9bd7f8667-lfs5z |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bce976095feb5321614ec8fda031d8c547cf3d990db7e5244ec28ca78dcbc642" already present on machine |
| (x2) | openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-5b66777f7c-9pqmc |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" already present on machine |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-55594bbb64-w77tp |
BackOff |
Back-off restarting failed container snapshot-controller in pod csi-snapshot-controller-55594bbb64-w77tp_openshift-cluster-storage-operator(497a1da9-5a17-4267-8959-d0a9d1f3d761) | |
| (x4) | openshift-multus |
kubelet |
multus-2r78s |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-cnhrq" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x4) | openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-g4gbv" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x4) | openshift-dns |
kubelet |
node-resolver-h4skh |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-nww2j" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x4) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-69dkf |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-tjlrx" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x4) | openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-j7dtf" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x4) | openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-mvnkf" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x4) | openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-wqrtv" : failed to fetch token: Timeout: request did not complete within requested timeout - context deadline exceeded |
| (x3) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-786b85b959-zrm7s |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:1b8e2349b542c1c7ace19af7d8b375557a8ab9df84e5858e85540714e1e55389" already present on machine |
| (x4) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-786b85b959-zrm7s |
Started |
Started container openshift-controller-manager-operator |
| (x4) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-786b85b959-zrm7s |
Created |
Created container openshift-controller-manager-operator |
openshift-network-operator |
kubelet |
network-operator-69d4947f66-6pwvp |
BackOff |
Back-off restarting failed container network-operator in pod network-operator-69d4947f66-6pwvp_openshift-network-operator(4cf1cbab-9459-4759-880c-2daffa673113) | |
| (x2) | openshift-route-controller-manager |
kubelet |
route-controller-manager-d8db88b9d-2pwz8 |
BackOff |
Back-off restarting failed container route-controller-manager in pod route-controller-manager-d8db88b9d-2pwz8_openshift-route-controller-manager(bc465bd4-ce45-461c-9da3-c6f6eff81d02) |
| (x2) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-55594bbb64-w77tp |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5a22e664cd05bf6f8a97d2f7b96ad5def60ce4c28d17c9d2d4ef0a14ed70714" already present on machine |
| (x3) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-55594bbb64-w77tp |
Created |
Created container snapshot-controller |
| (x3) | openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-55594bbb64-w77tp |
Started |
Started container snapshot-controller |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container etcd-rev | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container etcd-rev | |
| (x3) | openshift-network-operator |
kubelet |
network-operator-69d4947f66-6pwvp |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c2c10d6d0c5508feaf80dbe5b76cc99fdee0a4c8171e0d9a031cdc4d74a35912" already present on machine |
| (x4) | openshift-network-operator |
kubelet |
network-operator-69d4947f66-6pwvp |
Started |
Started container network-operator |
| (x4) | openshift-network-operator |
kubelet |
network-operator-69d4947f66-6pwvp |
Created |
Created container network-operator |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-txb89 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8bdbc6bbb-txb89_openshift-oauth-apiserver_ed5bd724-916b-4099-95df-95331bbf04f1_0(357bdbe378cf433c9f67b6bfc683741de2a61eb0106c37759f8cb299781cc2c6): error adding pod openshift-oauth-apiserver_apiserver-8bdbc6bbb-txb89 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"357bdbe378cf433c9f67b6bfc683741de2a61eb0106c37759f8cb299781cc2c6" Netns:"/var/run/netns/026f4cf3-f9f6-4820-965a-d71ed257ae18" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-8bdbc6bbb-txb89;K8S_POD_INFRA_CONTAINER_ID=357bdbe378cf433c9f67b6bfc683741de2a61eb0106c37759f8cb299781cc2c6;K8S_POD_UID=ed5bd724-916b-4099-95df-95331bbf04f1" Path:"" ERRORED: error configuring pod [openshift-oauth-apiserver/apiserver-8bdbc6bbb-txb89] networking: Multus: [openshift-oauth-apiserver/apiserver-8bdbc6bbb-txb89/ed5bd724-916b-4099-95df-95331bbf04f1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod apiserver-8bdbc6bbb-txb89 in out of cluster comm: SetNetworkStatus: failed to update the pod apiserver-8bdbc6bbb-txb89 in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-oauth-apiserver/pods/apiserver-8bdbc6bbb-txb89?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-kube-apiserver |
kubelet |
revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-apiserver_3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9_0(6ee102711772900ee531c71f1e3249ecacf3c6d6fed4c3377776ed2e29ac7fdd): error adding pod openshift-kube-apiserver_revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6ee102711772900ee531c71f1e3249ecacf3c6d6fed4c3377776ed2e29ac7fdd" Netns:"/var/run/netns/97db6f64-b413-4e86-928a-4301a5d8f177" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=6ee102711772900ee531c71f1e3249ecacf3c6d6fed4c3377776ed2e29ac7fdd;K8S_POD_UID=3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-apiserver/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2/3ebdcdd3-c53c-4a89-ae58-a184ee9c82c9]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: SetNetworkStatus: failed to update the pod revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| (x3) | openshift-route-controller-manager |
kubelet |
route-controller-manager-d8db88b9d-2pwz8 |
Started |
Started container route-controller-manager |
| (x3) | openshift-route-controller-manager |
kubelet |
route-controller-manager-d8db88b9d-2pwz8 |
Created |
Created container route-controller-manager |
| (x3) | openshift-route-controller-manager |
kubelet |
route-controller-manager-d8db88b9d-2pwz8 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-lfgzj | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod-identity-webhook-679666b9-lfgzj_openshift-cloud-credential-operator_5afa0922-af54-418e-9ab8-db641c23d4d8_0(f2d10249e7c2904ff44fcf88d58b74cd96c4a7365adb470f122c2c2b330f1a43): error adding pod openshift-cloud-credential-operator_pod-identity-webhook-679666b9-lfgzj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f2d10249e7c2904ff44fcf88d58b74cd96c4a7365adb470f122c2c2b330f1a43" Netns:"/var/run/netns/21758395-ff5e-4be8-bcd1-4ee75eb1e1fa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=pod-identity-webhook-679666b9-lfgzj;K8S_POD_INFRA_CONTAINER_ID=f2d10249e7c2904ff44fcf88d58b74cd96c4a7365adb470f122c2c2b330f1a43;K8S_POD_UID=5afa0922-af54-418e-9ab8-db641c23d4d8" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj] networking: Multus: [openshift-cloud-credential-operator/pod-identity-webhook-679666b9-lfgzj/5afa0922-af54-418e-9ab8-db641c23d4d8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod pod-identity-webhook-679666b9-lfgzj in out of cluster comm: SetNetworkStatus: failed to update the pod pod-identity-webhook-679666b9-lfgzj in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/pod-identity-webhook-679666b9-lfgzj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | certified-operators-nln6m | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-nln6m_openshift-marketplace_5439da16-223d-4d86-82da-06e1c02a4b55_0(a73086fa445156059dba01ad12f9f3b5953c37fe2f43cf7f3d8b33eca37c4140): error adding pod openshift-marketplace_certified-operators-nln6m to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a73086fa445156059dba01ad12f9f3b5953c37fe2f43cf7f3d8b33eca37c4140" Netns:"/var/run/netns/d8d64b0f-f37d-4c35-91df-3555cd7aaf8c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=certified-operators-nln6m;K8S_POD_INFRA_CONTAINER_ID=a73086fa445156059dba01ad12f9f3b5953c37fe2f43cf7f3d8b33eca37c4140;K8S_POD_UID=5439da16-223d-4d86-82da-06e1c02a4b55" Path:"" ERRORED: error configuring pod [openshift-marketplace/certified-operators-nln6m] networking: Multus: [openshift-marketplace/certified-operators-nln6m/5439da16-223d-4d86-82da-06e1c02a4b55]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod certified-operators-nln6m in out of cluster comm: SetNetworkStatus: failed to update the pod certified-operators-nln6m in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-nln6m?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-controller-manager | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-controller-manager_2fe034c5-4f06-435e-9971-f79cfa511446_0(8fc34aa1764d94422684ad7c5dda93ce612a6451861871c69b769a2f83bc9391): error adding pod openshift-kube-controller-manager_installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"8fc34aa1764d94422684ad7c5dda93ce612a6451861871c69b769a2f83bc9391" Netns:"/var/run/netns/eaae1f03-7689-4ef4-b131-329547d4d502" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=8fc34aa1764d94422684ad7c5dda93ce612a6451861871c69b769a2f83bc9391;K8S_POD_UID=2fe034c5-4f06-435e-9971-f79cfa511446" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-controller-manager/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2/2fe034c5-4f06-435e-9971-f79cfa511446]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| (x4) | openshift-kube-apiserver | multus | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-2wjrj | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_community-operators-2wjrj_openshift-marketplace_78f0eb9c-2de8-4051-9d15-31d970b476ca_0(6adfe2ef4ea5539d0fcdf3255628bb3e7be6c867367c4c2c4228985b44a78dc5): error adding pod openshift-marketplace_community-operators-2wjrj to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6adfe2ef4ea5539d0fcdf3255628bb3e7be6c867367c4c2c4228985b44a78dc5" Netns:"/var/run/netns/a813136f-c178-4b9a-8154-e26b306176d0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=community-operators-2wjrj;K8S_POD_INFRA_CONTAINER_ID=6adfe2ef4ea5539d0fcdf3255628bb3e7be6c867367c4c2c4228985b44a78dc5;K8S_POD_UID=78f0eb9c-2de8-4051-9d15-31d970b476ca" Path:"" ERRORED: error configuring pod [openshift-marketplace/community-operators-2wjrj] networking: Multus: [openshift-marketplace/community-operators-2wjrj/78f0eb9c-2de8-4051-9d15-31d970b476ca]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod community-operators-2wjrj in out of cluster comm: SetNetworkStatus: failed to update the pod community-operators-2wjrj in out of cluster comm: status update failed for pod /: Get "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-2wjrj?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-2pwz8 | Unhealthy | Readiness probe failed: Get "https://10.129.0.37:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-marketplace | multus | certified-operators-nln6m | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| (x4) | openshift-oauth-apiserver | multus | apiserver-8bdbc6bbb-txb89 | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-2pwz8 | ProbeError | Readiness probe error: Get "https://10.129.0.37:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| (x4) | openshift-cloud-credential-operator | multus | pod-identity-webhook-679666b9-lfgzj | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| (x3) | openshift-marketplace | multus | community-operators-2wjrj | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| (x4) | openshift-kube-controller-manager | multus | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.39/23] from ovn-kubernetes |
| (x6) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-2pwz8 | ProbeError | Readiness probe error: Get "https://10.129.0.37:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x6) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-2pwz8 | Unhealthy | Readiness probe failed: Get "https://10.129.0.37:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-machine-config-operator | machine-config-operator | openshift-machine-config-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-cluster-csi-drivers | external-snapshotter-leader-pd.csi.storage.gke.io/ci-op-2fcpj5j6-f6035-2lklf-master-1 | external-snapshotter-leader-pd-csi-storage-gke-io | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1 became leader |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | Created | Created container fix-audit-permissions |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Created | Created container extract-utilities |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | Started | Started container fix-audit-permissions |
| | openshift-network-node-identity | ci-op-2fcpj5j6-f6035-2lklf-master-1_55890177-8789-4855-a703-0fcc0cbabb12 | ovnkube-identity | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_55890177-8789-4855-a703-0fcc0cbabb12 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_2a36ada0-6d47-4b57-9eec-ec27b654101f became leader |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-kube-controller-manager | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-7b558f58f9-nfmbb_790fced7-f548-4eda-a40c-3d15ebf13b6f became leader |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-cloud-controller-manager | cloud-controller-manager-operator | gcp-cloud-controller-manager | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-55594bbb64-w77tp | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-55594bbb64-w77tp became leader |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Created | Created container extract-utilities |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-lfgzj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:73be5ac2f0ac40f28ee7f3e9e2c72c9be7bd72d86150b4f94115af21f40122aa" |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Started | Started container extract-utilities |
| | openshift-cluster-csi-drivers | external-attacher-leader-pd.csi.storage.gke.io/ci-op-2fcpj5j6-f6035-2lklf-master-1 | external-attacher-leader-pd-csi-storage-gke-io | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1 became leader |
| | openshift-cluster-csi-drivers | external-resizer-pd-csi-storage-gke-io/ci-op-2fcpj5j6-f6035-2lklf-master-2 | external-resizer-pd-csi-storage-gke-io | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2 became leader |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | Started | Started container oauth-apiserver |
| | openshift-ovn-kubernetes | kubelet | ovnkube-node-qfgpz | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (ovn-kubernetes-node-dockercfg-tj4f7); attempting to pull the image may not succeed. |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-9nnhr | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (tuned-dockercfg-lh7k6); attempting to pull the image may not succeed. |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | Created | Created container oauth-apiserver |
| | openshift-dns | kubelet | node-resolver-h4skh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-j94ng | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" |
| | openshift-multus | kubelet | multus-additional-cni-plugins-tqb4j | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (multus-ancillary-tools-dockercfg-xqf44); attempting to pull the image may not succeed. |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-multus | kubelet | multus-2r78s | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (default-dockercfg-mjrgj); attempting to pull the image may not succeed. |
| | openshift-multus | kubelet | multus-2r78s | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" |
| | openshift-cluster-csi-drivers | kubelet | gcp-pd-csi-driver-node-j94ng | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (gcp-pd-csi-driver-node-sa-dockercfg-7kpqd); attempting to pull the image may not succeed. |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | node-resolver-h4skh | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (node-resolver-dockercfg-vtgpd); attempting to pull the image may not succeed. |
| | openshift-cluster-node-tuning-operator | performance-profile-controller | cluster-node-tuning-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | Created | Created container kube-rbac-proxy |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.user.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-5f544c54d7-4lmsc became leader |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-cluster-csi-drivers | pd.csi.storage.gke.io/1729775212845-9483-pd.csi.storage.gke.io | pd-csi-storage-gke-io | LeaderElection | 1729775212845-9483-pd-csi-storage-gke-io became leader |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-lfgzj | Created | Created container pod-identity-webhook |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | FailedToRetrieveImagePullSecret | Unable to retrieve some image pull secrets (machine-config-daemon-dockercfg-bvmdb); attempting to pull the image may not succeed. |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-lfgzj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:73be5ac2f0ac40f28ee7f3e9e2c72c9be7bd72d86150b4f94115af21f40122aa" in 3.113s (3.113s including waiting). Image size: 423225411 bytes. |
| (x2) | openshift-cluster-version | kubelet | cluster-version-operator-59fc58bb8-h6cf2 | Created | Created container cluster-version-operator |
| (x2) | openshift-cluster-version | kubelet | cluster-version-operator-59fc58bb8-h6cf2 | Started | Started container cluster-version-operator |
| | openshift-cloud-credential-operator | kubelet | pod-identity-webhook-679666b9-lfgzj | Started | Started container pod-identity-webhook |
| | openshift-cluster-version | kubelet | cluster-version-operator-59fc58bb8-h6cf2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 4.143s (4.143s including waiting). Image size: 1110454519 bytes. |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 4.062s (4.062s including waiting). Image size: 955380483 bytes. |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Created | Created container extract-content |
| | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-cn29j | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-machine-config-operator | machine-config-operator | openshift-machine-config-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-cn29j | Started | Started container ovnkube-cluster-manager |
| | openshift-ovn-kubernetes | controlplane | ovn-kubernetes-master | LeaderElection | ovnkube-control-plane-54656c84bd-cn29j became leader |
| (x2) | openshift-ovn-kubernetes | kubelet | ovnkube-control-plane-54656c84bd-cn29j | Created | Created container ovnkube-cluster-manager |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 919ms (919ms including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 970ms (970ms including waiting). Image size: 896974229 bytes. |
| (x7) | openshift-multus | kubelet | multus-vxgq8 | BackOff | Back-off restarting failed container kube-multus in pod multus-vxgq8_openshift-multus(550038ec-7b00-438d-901b-c21abb1de15a) |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Started | Started container registry-server |
| (x12) | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-nln6m | Created | Created container registry-server |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 3 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 3 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-marketplace | kubelet | community-operators-2wjrj | Created | Created container registry-server |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 3 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| (x12) | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | "user.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
AddSigtermProtection |
Adding SIGTERM protection | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ConfigDriftMonitorStopped |
Config Drift Monitor stopped | |
| (x14) | openshift-ovn-kubernetes |
kubelet |
ovnkube-node-ts5rk |
Unhealthy |
Readiness probe failed: |
| (x152) | openshift-multus |
kubelet |
network-metrics-daemon-sfvgs |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| (x153) | openshift-network-diagnostics |
kubelet |
network-check-target-ztlz7 |
NetworkNotReady |
network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
openshift-cluster-csi-drivers |
gcp-pd-csi-driver-operator |
gcp-pd-csi-driver-operator-lock |
LeaderElection |
gcp-pd-csi-driver-operator-7ddb788594-zjfz2_9430c25f-edbf-4d2f-9b03-d9f17f725468 became leader | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Unhealthy |
Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of c7e00b6d41ce5a7bff21c208bb5587383c6b1b104c061197180dc556691e56ac is running failed: container process not found | |
openshift-marketplace |
kubelet |
community-operators-wj7jh |
Killing |
Stopping container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-nln6m |
Killing |
Stopping container registry-server | |
| (x24) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz_openshift-machine-config-operator(d513cfd165b3637c3dee03e221dbb27d) |
| (x4) | openshift-multus |
kubelet |
multus-vxgq8 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_fba473d8-7c9f-437c-bddd-fb02cd9e3cda became leader | |
| (x5) | openshift-multus |
kubelet |
multus-vxgq8 |
Created |
Created container kube-multus |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-0 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller | |
| (x5) | openshift-multus |
kubelet |
multus-vxgq8 |
Started |
Started container kube-multus |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-2 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 in Controller | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-dcf867d89 to 0 from 1 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-dcf867d89 |
SuccessfulDelete |
Deleted pod: apiserver-dcf867d89-zrhwj | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-8bdbc6bbb to 3 from 2 | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz in Controller | |
| (x28) | openshift-machine-config-operator |
kubelet |
kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
BackOff |
Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2_openshift-machine-config-operator(6edceae61b496c1230b0d63410fe98da) |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-8bdbc6bbb |
SuccessfulCreate |
Created pod: apiserver-8bdbc6bbb-hgf9w | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
BackOff |
Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
Failed |
Error: ImagePullBackOff | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
Failed |
Error: ErrImagePull | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
SkipReboot |
Config changes do not require reboot. | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
default |
ovnkube-csr-approver-controller |
csr-8tzfw |
CSRApproved |
CSR "csr-8tzfw" has been approved | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" | |
openshift-kube-controller-manager |
static-pod-installer |
installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 8 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-dcf867d89-zrhwj pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Failed |
Error: ErrImagePull | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
RemoveSigtermProtection |
Removing SIGTERM protection | |
| (x2) | openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Uncordon |
Update completed for config rendered-master-1f75404f08afc3926de8a846ea4bc6ff and node has been uncordoned |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
BackOff |
Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Failed |
Error: ImagePullBackOff | |
| (x2) | openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ConfigDriftMonitorStarted |
Config Drift Monitor started, watching against rendered-master-1f75404f08afc3926de8a846ea4bc6ff |
| (x2) | openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
NodeDone |
Setting node ci-op-2fcpj5j6-f6035-2lklf-master-1, currentConfig rendered-master-1f75404f08afc3926de8a846ea4bc6ff to Done |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:468a46c732664716f4b81ce3a23401278190f1edd19c90daa3ab5dba7fbdb7e2" in 34.332s (34.332s including waiting). Image size: 536898687 bytes. | |
openshift-dns |
kubelet |
node-resolver-h4skh |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" in 35.517s (35.517s including waiting). Image size: 563905988 bytes. | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Failed |
Error: ImagePullBackOff | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Created |
Created container csi-driver | |
openshift-multus |
kubelet |
multus-2r78s |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" in 35.607s (35.607s including waiting). Image size: 1209582329 bytes. | |
openshift-dns |
kubelet |
node-resolver-h4skh |
Created |
Created container dns-node-resolver | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Failed |
Error: ErrImagePull | |
openshift-dns |
kubelet |
node-resolver-h4skh |
Started |
Started container dns-node-resolver | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Started |
Started container csi-driver | |
openshift-multus |
kubelet |
multus-2r78s |
Created |
Created container kube-multus | |
openshift-multus |
kubelet |
multus-2r78s |
Started |
Started container kube-multus | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
BackOff |
Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled | |
| (x2) | openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" in 7.495s (7.495s including waiting). Image size: 897148932 bytes. | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
Started |
Started container tuned | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
Created |
Created container tuned | |
openshift-cluster-node-tuning-operator |
kubelet |
tuned-9nnhr |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:0f6079161de18da55f3189bddcfde1a22348b78a579b190465bda31a59e6b260" in 2.325s (2.325s including waiting). Image size: 680556885 bytes. | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-controller-manager | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-controller-manager | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Started |
Started container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:aad1f78848feb211ade49ba7f56c32e9af371d72d62bf2fa64e79d3016d3ec2f" in 3.826s (3.826s including waiting). Image size: 396191352 bytes. | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Created |
Created container csi-node-driver-registrar | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Created |
Created container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Started |
Started container csi-liveness-probe | |
openshift-cluster-csi-drivers |
kubelet |
gcp-pd-csi-driver-node-j94ng |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9caefe481d8fe0d508a97fa090ea0c3d94fc39381e3c41f6a42bc8c174ff03db" in 2.714s (2.714s including waiting). Image size: 396574211 bytes. | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine | |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" in 3.942s (3.942s including waiting). Image size: 487094132 bytes. | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine | |
| (x2) | openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" |
| (x2) | openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08b1ec5a8e7a81d7bfe9d0649a7b3a3892d53ce3cc6ae3c82c51e81f20b62e96" in 1.114s (1.114s including waiting). Image size: 571426836 bytes. | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" in 1.219s (1.219s including waiting). Image size: 1406971151 bytes. | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Started |
Started container kubecfg-setup | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Created |
Created container egress-router-binary-copy | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Created |
Created container kubecfg-setup | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Created |
Created container ovn-acl-logging | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Started |
Started container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Started |
Started container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Created |
Created container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Created |
Created container kube-rbac-proxy-ovn-metrics | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Created |
Created container kube-rbac-proxy-node | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-qfgpz |
Started |
Started container ovn-acl-logging | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ab5384c3fd93b40d299eb1ac0e115ffc0e96a14a88d2367d4c21ca4179a35a1d" in 6.462s (6.462s including waiting). Image size: 691795442 bytes. | |
default |
ovnkube-csr-approver-controller |
csr-xzdjd |
CSRApproved |
CSR "csr-xzdjd" has been approved | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Started |
Started container cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Created |
Created container cni-plugins | |
default |
ovnkube-csr-approver-controller |
csr-ld4rr |
CSRApproved |
CSR "csr-ld4rr" has been approved | |
default |
ovnkube-csr-approver-controller |
csr-5ssp6 |
CSRApproved |
CSR "csr-5ssp6" has been approved | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Created |
Created container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Started |
Started container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:225334ac4eacee6b93b19fa939b565b0e177d0da9d49e701ab4db7309253fcaa" in 7.543s (7.543s including waiting). Image size: 389927221 bytes. | |
openshift-ingress-canary |
daemonset-controller |
ingress-canary |
SuccessfulCreate |
Created pod: ingress-canary-9wwh9 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bb11b8d7f3699b6ca202deb5c5139690eb865df1fe778f37f3a88af35ba65225" in 5.087s (5.087s including waiting). Image size: 375717862 bytes. | |
default |
kubelet |
ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
NodeReady |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x status is now: NodeReady | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-tqb4j |
Created |
Created container routeoverride-cni | |
default |
kubelet |
ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
NodeReady |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 status is now: NodeReady | |
openshift-network-operator |
daemonset-controller |
iptables-alerter |
SuccessfulCreate |
Created pod: iptables-alerter-xd48t | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 3 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-dcf867d89-zrhwj pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in terminated apiserver-dcf867d89-zrhwj pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsDisabled | (x3) Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (3 endpoints, 2 zones), addressType: IPv4 |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-fg6jx |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-hhkt7 |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-hg5p9 |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-k4f7q |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in terminated apiserver-dcf867d89-zrhwj pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-multus | kubelet | multus-hb5v6 | BackOff | (x6) Back-off restarting failed container kube-multus in pod multus-hb5v6_openshift-multus(74ff244e-961b-4ce6-8f05-36d016ff13c1) |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-bwn2n | BackOff | (x7) Back-off restarting failed container cloud-controller-manager in pod gcp-cloud-controller-manager-6658458d69-bwn2n_openshift-cloud-controller-manager(6b0450f1-79b3-4d92-b591-7658e5e7f2d6) |
| | openshift-network-diagnostics | multus | network-check-target-ztlz7 | AddedInterface | Add eth0 [10.131.0.4/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-sfvgs | AddedInterface | Add eth0 [10.131.0.5/23] from ovn-kubernetes |
| | openshift-multus | multus | network-metrics-daemon-d5jsz | AddedInterface | Add eth0 [10.129.2.4/23] from ovn-kubernetes |
| | openshift-network-diagnostics | kubelet | network-check-target-zh6rm | NetworkNotReady | (x152) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | network-metrics-daemon-bmskb | NetworkNotReady | (x152) network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| | openshift-multus | kubelet | multus-hb5v6 | Pulled | (x4) Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine |
| | openshift-multus | kubelet | multus-hb5v6 | Started | (x5) Started container kube-multus |
| | openshift-multus | kubelet | multus-hb5v6 | Created | (x5) Created container kube-multus |
| | default | ovnkube-csr-approver-controller | csr-tfv6j | CSRApproved | CSR "csr-tfv6j" has been approved |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-bwn2n | Created | (x5) Created container cloud-controller-manager |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-bwn2n | Started | (x5) Started container cloud-controller-manager |
| | openshift-cloud-controller-manager | kubelet | gcp-cloud-controller-manager-6658458d69-bwn2n | Pulled | (x4) Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ef3d5fbb8b8ca09ab404e00e3d616471fdef91190d13610d028995b47d24b2be" already present on machine |
| | openshift-network-operator | daemonset-controller | iptables-alerter | SuccessfulCreate | Created pod: iptables-alerter-xkvzh |
| | openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-jm7sd |
| | openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-lmjwh |
| | openshift-network-diagnostics | multus | network-check-target-vqt97 | AddedInterface | Add eth0 [10.129.2.3/23] from ovn-kubernetes |
| | openshift-machine-api | machine-api-controllers-7785d897-m4jlj_eaedefb2-e113-4795-83b9-72f9ea646d6f | cluster-api-provider-nodelink-leader | LeaderElection | machine-api-controllers-7785d897-m4jlj_eaedefb2-e113-4795-83b9-72f9ea646d6f became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-786b85b959-zrm7s_233c8b46-eede-447b-8bf7-4ab6579eb72e became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-machine-api | control-plane-machine-set-operator-7667c744f7-8tlf7_4ce101c3-c43d-4cbe-9960-8407fc3f61b5 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-7667c744f7-8tlf7_4ce101c3-c43d-4cbe-9960-8407fc3f61b5 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-6846798df4-kwxvp_17c14060-3871-4534-b949-337c3facff59 became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)" to "All is well",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
| | openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | authentication-operator | OpenShiftAPICheckFailed | (x23) "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-lock | LeaderElection | openshift-kube-scheduler-operator-7b64b578df-w9z5s_a5d85acd-0e5e-4ba7-8c30-82db8bd296f3 became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator | openshift-kube-scheduler-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"configmap\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:12:20.289363 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:12:30.289105 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:12:40.289375 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:12:50.290106 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:13:00.289783 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:13:00.291016 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:13:00.291101 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-7bf6f695bf-4rjcs_802d8b8e-1643-4470-b252-b9dd0f1add3e became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | FastControllerResync | Controller "openshift-apiserver-APIService" resync interval is set to 10s which might lead to client request throttling |
| | openshift-machine-config-operator | machine-config-operator | ci-op-2fcpj5j6-f6035-2lklf-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:12:20.289363 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:12:30.289105 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:12:40.289375 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:12:50.290106 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:13:00.289783 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:13:00.291016 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:13:00.291101 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: refixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.621111 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.621191 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: F1024 13:16:24.639340 1 cmd.go:105] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it changed |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator | csi-snapshot-controller-operator-lock | LeaderElection | csi-snapshot-controller-operator-9bd7f8667-lfs5z_c5922506-d7f3-4fc9-9bea-3b9c8ad8a30a became leader |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | installer errors: installer: refixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0 I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0 I1024 13:15:40.621111 1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false I1024 13:15:40.621191 1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0 I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0 F1024 13:16:24.639340 1 cmd.go:105] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-lock |
LeaderElection |
kube-storage-version-migrator-operator-86c7d8d555-x49bl_86ed1b03-50b0-4bf0-a004-034377863f88 became leader | |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
openshift-machine-api |
cluster-baremetal-operator-7648bf4f7c-nml8w_c661b7e5-1593-4260-b17f-2c3e9944435f |
cluster-baremetal-operator |
LeaderElection |
cluster-baremetal-operator-7648bf4f7c-nml8w_c661b7e5-1593-4260-b17f-2c3e9944435f became leader | |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
openshift-apiserver-operator |
OpenShiftAPICheckFailed |
"authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
| (x27) | openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice |
authentication-operator |
OpenShiftAPICheckFailed |
"user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
openshift-route-controller-manager |
route-controller-manager |
openshift-route-controllers |
LeaderElection |
route-controller-manager-d8db88b9d-2pwz8_34eb3f79-c12b-453b-ad2e-9fbe3667fdd0 became leader | |
openshift-service-ca |
service-ca-controller |
service-ca-controller-lock |
LeaderElection |
service-ca-7949b5fbb4-gsbvx_64d084cc-80cb-4821-bd51-5ed9c24c2d52 became leader | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: " to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: " to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" to "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerWorkloadDegraded: " | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from False to True (""),Available message changed from "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: route.route.openshift.io \"oauth-openshift\" not found" | |
openshift-cluster-machine-approver |
ci-op-2fcpj5j6-f6035-2lklf-master-1_48083428-3616-42af-a879-6ced86c468bc |
cluster-machine-approver-leader |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-1_48083428-3616-42af-a879-6ced86c468bc became leader | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_1013d650-1c1a-4f7c-9511-36a3813b09e1 became leader | |
openshift-machine-api |
machine-api-controllers-7785d897-m4jlj_542df196-2ff1-4ba6-82e6-a5a1190b309f |
cluster-api-provider-machineset-leader |
LeaderElection |
machine-api-controllers-7785d897-m4jlj_542df196-2ff1-4ba6-82e6-a5a1190b309f became leader | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-config-operator |
config-operator-configoperatorcontroller |
openshift-config-operator |
FastControllerResync |
Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
ConfigDriftMonitorStarted |
Config Drift Monitor started, watching against rendered-worker-59826e19ffd81ce395b52f6b2b19b336 | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
NodeDone |
Setting node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz, currentConfig rendered-worker-59826e19ffd81ce395b52f6b2b19b336 to Done | |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
Uncordon |
Update completed for config rendered-worker-59826e19ffd81ce395b52f6b2b19b336 and node has been uncordoned | |
openshift-config-operator |
config-operator |
config-operator-lock |
LeaderElection |
openshift-config-operator-85b957bbfc-dwcrh_e7966cc0-ee36-439f-be1f-99894857677b became leader | |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded changed from True to False ("GCPPDCSIDriverOperatorCRDegraded: All is well") | |
openshift-machine-api |
machine-api-controllers-7785d897-m4jlj_651b7fe0-afd9-484f-918c-4e53a4793d94 |
cluster-api-provider-healthcheck-leader |
LeaderElection |
machine-api-controllers-7785d897-m4jlj_651b7fe0-afd9-484f-918c-4e53a4793d94 became leader | |
openshift-cluster-storage-operator |
cluster-storage-operator |
cluster-storage-operator-lock |
LeaderElection |
cluster-storage-operator-86f6b4f867-vvnvr_6a2400ad-c2a0-40dd-9e8a-3272277cff71 became leader | |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded changed from False to True ("GCPPDCSIDriverOperatorStaticControllerDegraded: \"csidriveroperators/gcp-pd/02_sa.yaml\" (string): rpc error: code = Unknown desc = malformed header: missing HTTP content-type\nGCPPDCSIDriverOperatorStaticControllerDegraded: \"csidriveroperators/gcp-pd/03_role.yaml\" (string): rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:59964->10.0.0.5:2379: read: connection timed out\nGCPPDCSIDriverOperatorStaticControllerDegraded: ") | |
openshift-cloud-controller-manager-operator |
ci-op-2fcpj5j6-f6035-2lklf-master-1_80027165-f567-4528-b1c7-df8b9aa4e704 |
cluster-cloud-controller-manager-leader |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-1_80027165-f567-4528-b1c7-df8b9aa4e704 became leader | |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing | |
openshift-network-diagnostics |
multus |
network-check-target-zh6rm |
AddedInterface |
Add eth0 [10.128.2.4/23] from ovn-kubernetes | |
openshift-multus |
multus |
network-metrics-daemon-bmskb |
AddedInterface |
Add eth0 [10.128.2.3/23] from ovn-kubernetes | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
Uncordon |
Update completed for config rendered-worker-59826e19ffd81ce395b52f6b2b19b336 and node has been uncordoned | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
NodeDone |
Setting node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2, currentConfig rendered-worker-59826e19ffd81ce395b52f6b2b19b336 to Done | |
openshift-machine-config-operator |
machineconfigdaemon |
ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
ConfigDriftMonitorStarted |
Config Drift Monitor started, watching against rendered-worker-59826e19ffd81ce395b52f6b2b19b336 | |
openshift-machine-api |
machine-api-controllers-7785d897-m4jlj_a828e785-8186-48e1-8ee1-826f8c236bb9 |
cluster-api-provider-gcp-leader |
LeaderElection |
machine-api-controllers-7785d897-m4jlj_a828e785-8186-48e1-8ee1-826f8c236bb9 became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_78f9ba2d-98f7-4972-9111-385add4bcf9a became leader | |
openshift-kube-scheduler |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_a9c2119a-61e7-4f9c-bb68-65c27466d202 became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_c0e1c13e-68ca-48e2-81e0-4e314fa9f2fe became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator-lock |
LeaderElection |
kube-apiserver-operator-749f4b99b7-fqnd2_abd6523a-0958-42d6-b47e-3ea5745d2a3c became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator |
kube-apiserver-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
| | openshift-cloud-controller-manager-operator | ci-op-2fcpj5j6-f6035-2lklf-master-1_bafaffff-192f-41f3-9532-28bba1d896b3 | cluster-cloud-config-sync-leader | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_bafaffff-192f-41f3-9532-28bba1d896b3 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): rpc error: code = Unknown desc = malformed header: missing HTTP content-type\nBackingResourceControllerDegraded: \nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 7" |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" image="registry.build02.ci.openshift.org/ci-op-2fcpj5j6/release@sha256:650171d292e22d33197d8e96d922613b50b4135ef9b5ae1581c858e3038de141" architecture="amd64" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): rpc error: code = Unknown desc = malformed header: missing HTTP content-type\nBackingResourceControllerDegraded: \nGuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-retry-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | installer-6-retry-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-6-retry-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | multus | installer-6-retry-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.44/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-6-retry-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-5b66777f7c-9pqmc_bb2ca6a6-560c-42d9-b8d5-a0e25fbdf939 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-5b66777f7c-9pqmc_bb2ca6a6-560c-42d9-b8d5-a0e25fbdf939 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdEndpointsDegraded: failed to get member list: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" to "BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | etcd-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from False to True ("BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdEndpointsDegraded: failed to get member list: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-7bbcf99d5c-9746p_9ca2cd62-2569-4e04-9881-695eebe9adb6 became leader |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" to "BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-vffs9 |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nTargetConfigControllerDegraded: \"configmap/etcd-pod\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" to "BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-kube-apiserver)\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" |
| | openshift-image-registry | image-registry-operator | openshift-master-controllers | LeaderElection | cluster-image-registry-operator-7c8c54f569-rsqg2_7af6841f-7287-4ea9-8410-4cdae8835d31 became leader |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-wzchn |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-vlzbv |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-k7nd8 |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-kqzbc |
| | openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-pvk4d |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/etcd-pod\": rpc error: code = Unknown desc = malformed header: missing HTTP content-type" to "BootstrapTeardownDegraded: error while updating NotEnoughEtcdMembers: client rate limiter Wait returned an error: context deadline exceeded\nEtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-etcd)\nEtcdStaticResourcesDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveInternalRegistryHostnameChanged | Internal registry hostname changed to "image-registry.openshift-image-registry.svc:5000" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://10.0.0.3:2379"), string("https://10.0.0.4:2379"), string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), ...}, ...}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, + "imagePolicyConfig": map[string]any{ + "internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000"), + }, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
| | openshift-machine-config-operator | machine-config-operator | ci-op-2fcpj5j6-f6035-2lklf-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-ingress-canary | default-scheduler | ingress-canary-9wwh9 | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-9wwh9 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-image-registry | deployment-controller | image-registry | ScalingReplicaSet | Scaled up replica set image-registry-7db6746b67 to 2 |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DeploymentCreated | Created Deployment.apps/image-registry -n openshift-image-registry because it was missing |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-79648c8fd6-swcgw | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-79648c8fd6-swcgw to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-image-registry | default-scheduler | image-registry-7db6746b67-4gv7k | Scheduled | Successfully assigned openshift-image-registry/image-registry-7db6746b67-4gv7k to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-dns | default-scheduler | dns-default-fg6jx | Scheduled | Successfully assigned openshift-dns/dns-default-fg6jx to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-daemon-8lq4q | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| (x6) | openshift-machine-config-operator | kubelet | machine-config-daemon-8lq4q | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| | openshift-operator-lifecycle-manager | default-scheduler | collect-profiles-28829595-6kbmk | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-28829595-6kbmk to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-8lq4q | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| | openshift-image-registry | default-scheduler | node-ca-pvk4d | Scheduled | Successfully assigned openshift-image-registry/node-ca-pvk4d to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-network-operator | default-scheduler | iptables-alerter-xd48t | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-xd48t to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 1 to 12 because static pod is ready |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-79648c8fd6-9gxqf | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-79648c8fd6-9gxqf to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-dns | default-scheduler | dns-default-hg5p9 | Scheduled | Successfully assigned openshift-dns/dns-default-hg5p9 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-image-registry | default-scheduler | node-ca-kqzbc | Scheduled | Successfully assigned openshift-image-registry/node-ca-kqzbc to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-ingress-canary | default-scheduler | ingress-canary-hhkt7 | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-hhkt7 to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-marketplace | default-scheduler | redhat-operators-s5p8x | Scheduled | Successfully assigned openshift-marketplace/redhat-operators-s5p8x to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-image-registry | default-scheduler | node-ca-k7nd8 | Scheduled | Successfully assigned openshift-image-registry/node-ca-k7nd8 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-network-operator | default-scheduler | iptables-alerter-k4f7q | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-k4f7q to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-marketplace | default-scheduler | redhat-marketplace-xj76l | Scheduled | Successfully assigned openshift-marketplace/redhat-marketplace-xj76l to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| (x2) | openshift-image-registry | controllermanager | image-registry | NoPods | No matching pods found |
| | openshift-image-registry | replicaset-controller | image-registry-7db6746b67 | SuccessfulCreate | Created pod: image-registry-7db6746b67-87llm |
| | openshift-network-diagnostics | default-scheduler | network-check-source-5ff84586ff-b49fv | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-5ff84586ff-b49fv to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-image-registry | default-scheduler | node-ca-wzchn | Scheduled | Successfully assigned openshift-image-registry/node-ca-wzchn to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-ingress | default-scheduler | router-default-bbcfc976b-xnpn7 | Scheduled | Successfully assigned openshift-ingress/router-default-bbcfc976b-xnpn7 to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-dns | default-scheduler | dns-default-jm7sd | Scheduled | Successfully assigned openshift-dns/dns-default-jm7sd to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-oauth-apiserver | default-scheduler | apiserver-8bdbc6bbb-hgf9w | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-8bdbc6bbb-hgf9w to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-image-registry | default-scheduler | node-ca-vffs9 | Scheduled | Successfully assigned openshift-image-registry/node-ca-vffs9 to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-apiserver | default-scheduler | apiserver-6d7dbc56c5-l698n | Scheduled | Successfully assigned openshift-apiserver/apiserver-6d7dbc56c5-l698n to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-ingress-canary | default-scheduler | ingress-canary-lmjwh | Scheduled | Successfully assigned openshift-ingress-canary/ingress-canary-lmjwh to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_126dda2a-3f4f-4e7d-8a8a-a1b3a1e7b9fc became leader |
| | openshift-image-registry | replicaset-controller | image-registry-7db6746b67 | SuccessfulCreate | Created pod: image-registry-7db6746b67-4gv7k |
| | openshift-image-registry | default-scheduler | node-ca-vlzbv | Scheduled | Successfully assigned openshift-image-registry/node-ca-vlzbv to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-ingress | default-scheduler | router-default-bbcfc976b-4r8cp | Scheduled | Successfully assigned openshift-ingress/router-default-bbcfc976b-4r8cp to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-image-registry | default-scheduler | image-registry-7db6746b67-87llm | Scheduled | Successfully assigned openshift-image-registry/image-registry-7db6746b67-87llm to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-network-operator | default-scheduler | iptables-alerter-xkvzh | Scheduled | Successfully assigned openshift-network-operator/iptables-alerter-xkvzh to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-l698n | Created | Created container fix-audit-permissions |
| | openshift-dns | multus | dns-default-fg6jx | AddedInterface | Add eth0 [10.129.2.6/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-xj76l | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-image-registry | multus | image-registry-7db6746b67-87llm | AddedInterface | Add eth0 [10.129.2.9/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | image-registry-7db6746b67-87llm | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" |
| | openshift-image-registry | kubelet | image-registry-7db6746b67-4gv7k | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" |
| | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/image-import-ca -n openshift-apiserver: cause by changes in data.image-registry.openshift-image-registry.svc..5000,data.image-registry.openshift-image-registry.svc.cluster.local..5000 |
| | openshift-network-diagnostics | kubelet | network-check-source-5ff84586ff-b49fv | Started | Started container check-endpoints |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Started | Started container extract-utilities |
| | openshift-apiserver | multus | apiserver-6d7dbc56c5-l698n | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-l698n | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-dns | kubelet | dns-default-fg6jx | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" |
| | openshift-marketplace | multus | redhat-operators-s5p8x | AddedInterface | Add eth0 [10.129.0.45/23] from ovn-kubernetes |
| | openshift-image-registry | kubelet | node-ca-k7nd8 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" |
openshift-image-registry |
multus |
image-registry-7db6746b67-4gv7k |
AddedInterface |
Add eth0 [10.131.0.10/23] from ovn-kubernetes | |
openshift-network-diagnostics |
kubelet |
network-check-source-5ff84586ff-b49fv |
Created |
Created container check-endpoints | |
openshift-marketplace |
multus |
redhat-marketplace-xj76l |
AddedInterface |
Add eth0 [10.128.0.44/23] from ovn-kubernetes | |
openshift-image-registry |
kubelet |
node-ca-kqzbc |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-28829595-6kbmk |
AddedInterface |
Add eth0 [10.131.0.11/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829595-6kbmk |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" | |
openshift-marketplace |
kubelet |
redhat-marketplace-xj76l |
Created |
Created container extract-utilities | |
openshift-image-registry |
image-registry-operator |
cluster-image-registry-operator |
DeploymentUpdated |
Updated Deployment.apps/image-registry -n openshift-image-registry because it changed | |
openshift-apiserver |
kubelet |
apiserver-6d7dbc56c5-l698n |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-hgf9w |
Started |
Started container fix-audit-permissions | |
openshift-network-diagnostics |
kubelet |
network-check-source-5ff84586ff-b49fv |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c2c10d6d0c5508feaf80dbe5b76cc99fdee0a4c8171e0d9a031cdc4d74a35912" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-hgf9w |
Created |
Created container fix-audit-permissions | |
openshift-network-operator |
kubelet |
iptables-alerter-xd48t |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-hgf9w |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-xj76l |
Started |
Started container extract-utilities | |
openshift-oauth-apiserver |
multus |
apiserver-8bdbc6bbb-hgf9w |
AddedInterface |
Add eth0 [10.130.0.68/23] from ovn-kubernetes | |
openshift-network-diagnostics |
multus |
network-check-source-5ff84586ff-b49fv |
AddedInterface |
Add eth0 [10.131.0.9/23] from ovn-kubernetes | |
openshift-dns |
multus |
dns-default-hg5p9 |
AddedInterface |
Add eth0 [10.131.0.6/23] from ovn-kubernetes | |
openshift-dns |
kubelet |
dns-default-hg5p9 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" | |
openshift-ingress |
kubelet |
router-default-bbcfc976b-4r8cp |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e5e5ffff50ae95ff88fb46c439b222e3c169f219d7b6a424a857e4d3c3b87db5" | |
openshift-ingress |
multus |
router-default-bbcfc976b-4r8cp |
AddedInterface |
Add eth0 [10.129.2.8/23] from ovn-kubernetes | |
openshift-image-registry |
kubelet |
node-ca-vffs9 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" | |
openshift-image-registry |
kubelet |
node-ca-vlzbv |
FailedMount |
MountVolume.SetUp failed for volume "serviceca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-image-registry |
kubelet |
node-ca-pvk4d |
FailedMount |
MountVolume.SetUp failed for volume "serviceca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-dns |
multus |
dns-default-jm7sd |
AddedInterface |
Add eth0 [10.128.2.7/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-hgf9w |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-dns |
kubelet |
dns-default-jm7sd |
FailedMount |
MountVolume.SetUp failed for volume "metrics-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-hgf9w |
Created |
Created container oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-hgf9w |
Started |
Started container oauth-apiserver | |
openshift-network-operator |
kubelet |
iptables-alerter-xkvzh |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" already present on machine | |
openshift-ingress-canary |
kubelet |
ingress-canary-lmjwh |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-marketplace |
kubelet |
redhat-operators-s5p8x |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-79648c8fd6-swcgw |
FailedMount |
MountVolume.SetUp failed for volume "tls-certificates" : failed to sync secret cache: timed out waiting for the condition | |
openshift-dns |
kubelet |
dns-default-jm7sd |
FailedMount |
MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-image-registry |
kubelet |
node-ca-pvk4d |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-79648c8fd6-9gxqf |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:79ff0a2d212ea579cd1ad51d5db86ac45e0f907380bcf75a3c9d1a164bced808" | |
openshift-image-registry |
kubelet |
node-ca-wzchn |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
multus |
installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.46/23] from ovn-kubernetes | |
openshift-ingress |
multus |
router-default-bbcfc976b-xnpn7 |
AddedInterface |
Add eth0 [10.128.2.6/23] from ovn-kubernetes | |
openshift-ingress-canary |
kubelet |
ingress-canary-hhkt7 |
FailedMount |
MountVolume.SetUp failed for volume "kube-api-access-wktf2" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-ingress-canary |
kubelet |
ingress-canary-hhkt7 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-apiserver |
kubelet |
apiserver-6d7dbc56c5-l698n |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-6d7dbc56c5-l698n |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-6d7dbc56c5-l698n |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6d7dbc56c5-l698n |
Started |
Started container openshift-apiserver | |
| (x3) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-8lq4q |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine |
| (x3) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-8lq4q |
Created |
Created container machine-config-daemon |
| (x3) | openshift-machine-config-operator |
kubelet |
machine-config-daemon-8lq4q |
Started |
Started container machine-config-daemon |
openshift-apiserver |
kubelet |
apiserver-6d7dbc56c5-l698n |
Created |
Created container openshift-apiserver | |
openshift-ingress-canary |
kubelet |
ingress-canary-9wwh9 |
FailedMount |
MountVolume.SetUp failed for volume "cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-apiserver |
kubelet |
apiserver-6d7dbc56c5-l698n |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-network-operator |
kubelet |
iptables-alerter-k4f7q |
FailedMount |
MountVolume.SetUp failed for volume "iptables-alerter-script" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-network-operator |
kubelet |
iptables-alerter-k4f7q |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2281a4b71c7b7f262260cdc3e2b091d20fab875a1c81c593e1a51d7f17fa2a34" already present on machine | |
openshift-machine-api |
cluster-autoscaler-operator-776f9d4bf4-dthxh_dc68f770-85a6-4be9-ae6d-8ba388f2eee0 |
cluster-autoscaler-operator-leader |
LeaderElection |
cluster-autoscaler-operator-776f9d4bf4-dthxh_dc68f770-85a6-4be9-ae6d-8ba388f2eee0 became leader | |
openshift-ingress |
kubelet |
router-default-bbcfc976b-xnpn7 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e5e5ffff50ae95ff88fb46c439b222e3c169f219d7b6a424a857e4d3c3b87db5" | |
openshift-marketplace |
kubelet |
redhat-marketplace-xj76l |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-monitoring |
multus |
prometheus-operator-admission-webhook-79648c8fd6-9gxqf |
AddedInterface |
Add eth0 [10.131.0.7/23] from ovn-kubernetes | |
openshift-machine-config-operator |
machine-config-operator |
ci-op-2fcpj5j6-f6035-2lklf-master-0 |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-marketplace |
kubelet |
redhat-marketplace-xj76l |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-xj76l |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.111s (1.111s including waiting). Image size: 967040755 bytes. | |
openshift-apiserver |
replicaset-controller |
apiserver-6d7dbc56c5 |
SuccessfulDelete |
Deleted pod: apiserver-6d7dbc56c5-l698n | |
openshift-marketplace |
kubelet |
redhat-marketplace-xj76l |
Created |
Created container extract-content | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." | |
openshift-network-operator |
kubelet |
iptables-alerter-xd48t |
Started |
Started container iptables-alerter | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-apiserver |
replicaset-controller |
apiserver-67f7894794 |
SuccessfulCreate |
Created pod: apiserver-67f7894794-jl9rf | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-67f7894794 to 1 from 0 | |
openshift-image-registry |
kubelet |
node-ca-vlzbv |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-6d7dbc56c5 to 1 from 2 | |
openshift-monitoring |
kubelet |
prometheus-operator-admission-webhook-79648c8fd6-swcgw |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:79ff0a2d212ea579cd1ad51d5db86ac45e0f907380bcf75a3c9d1a164bced808" | |
openshift-dns |
kubelet |
dns-default-jm7sd |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-console namespace | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "apiServerArguments": map[string]any{"feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}}, + "imagePolicyConfig": map[string]any{ + "internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000"), + }, "projectConfig": map[string]any{"projectRequestMessage": string("")}, "routingConfig": map[string]any{"subdomain": string("apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX")}, ... // 2 identical entries } | |
openshift-monitoring |
multus |
prometheus-operator-admission-webhook-79648c8fd6-swcgw |
AddedInterface |
Add eth0 [10.129.2.7/23] from ovn-kubernetes | |
openshift-network-operator |
kubelet |
iptables-alerter-xd48t |
Created |
Created container iptables-alerter | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
CreatedSCCRanges |
created SCC ranges for openshift-console-operator namespace | |
openshift-network-operator |
kubelet |
iptables-alerter-xkvzh |
Created |
Created container iptables-alerter | |
openshift-network-operator |
kubelet |
iptables-alerter-xkvzh |
Started |
Started container iptables-alerter | |
openshift-image-registry |
kubelet |
node-ca-kqzbc |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" in 3.042s (3.042s including waiting). Image size: 472728551 bytes. | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nSATokenSignerDegraded: Operation cannot be fulfilled on secrets \"service-account-private-key\": the object has been modified; please apply your changes to the latest version and try again" to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: configmaps: cluster-policy-controller-config-9,config-9,controller-manager-kubeconfig-9,kube-controller-cert-syncer-kubeconfig-9,kube-controller-manager-pod-9,recycler-config-9,service-ca-9,serviceaccount-ca-9",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 9",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 9" | |
openshift-marketplace |
kubelet |
redhat-marketplace-xj76l |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
| (x2) | openshift-apiserver |
default-scheduler |
apiserver-67f7894794-jl9rf |
FailedScheduling |
0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", 
"VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: 
\nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-8bdbc6bbb-hgf9w pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX in route oauth-openshift in namespace 
openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-image-registry |
kubelet |
node-ca-kqzbc |
Created |
Created container node-ca | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator-lock |
LeaderElection |
kube-controller-manager-operator-7c885b8899-z89zf_7b42a41a-ced4-472b-876b-0679545bc14f became leader | |
openshift-image-registry |
kubelet |
node-ca-kqzbc |
Started |
Started container node-ca | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps kube-apiserver-pod)" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" | |
openshift-marketplace |
kubelet |
redhat-operators-s5p8x |
Created |
Created container extract-content | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 10 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-apiserver |
kubelet |
apiserver-6d7dbc56c5-l698n |
Killing |
Stopping container openshift-apiserver | |
| | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-l698n | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-network-operator | kubelet | iptables-alerter-k4f7q | Created | Created container iptables-alerter |
| | openshift-network-operator | kubelet | iptables-alerter-k4f7q | Started | Started container iptables-alerter |
| | openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetUpdated | Updated DaemonSet.apps/node-ca -n openshift-image-registry because it changed |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Started | Started container extract-content |
| | openshift-image-registry | kubelet | node-ca-pvk4d | Started | Started container node-ca |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 3.09s (3.09s including waiting). Image size: 1411450299 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-xj76l | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-xj76l | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-xj76l | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 900ms (900ms including waiting). Image size: 896974229 bytes. |
| | openshift-cloud-controller-manager | cloud-controller-manager | cloud-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_9a42ab72-4b8e-449d-9976-f124938781d9 became leader |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace |
| | openshift-image-registry | kubelet | node-ca-pvk4d | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" in 2.254s (2.254s including waiting). Image size: 472728551 bytes. |
| | openshift-image-registry | kubelet | node-ca-pvk4d | Created | Created container node-ca |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-ingress | service-controller | router-default | EnsuringLoadBalancer | Ensuring load balancer |
| | openshift-image-registry | kubelet | node-ca-vlzbv | Started | Started container node-ca |
| | openshift-image-registry | kubelet | node-ca-vlzbv | Created | Created container node-ca |
| | openshift-image-registry | kubelet | node-ca-vlzbv | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" in 3.016s (3.016s including waiting). Image size: 472728551 bytes. |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | ProbeError | Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused body: |
| (x3) | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | Unhealthy | Liveness probe failed: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | Killing | Container machine-config-daemon failed liveness probe, will be restarted |
| | openshift-dns | kubelet | dns-default-hg5p9 | Created | Created container dns |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79648c8fd6-swcgw | Created | Created container prometheus-operator-admission-webhook |
| | openshift-apiserver | replicaset-controller | apiserver-6d6946f85d | SuccessfulCreate | Created pod: apiserver-6d6946f85d-wdq7x |
| | openshift-dns | kubelet | dns-default-fg6jx | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" in 6.03s (6.03s including waiting). Image size: 464555200 bytes. |
| | openshift-dns | kubelet | dns-default-fg6jx | Created | Created container dns |
| | openshift-dns | kubelet | dns-default-fg6jx | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-fg6jx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79648c8fd6-9gxqf | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:79ff0a2d212ea579cd1ad51d5db86ac45e0f907380bcf75a3c9d1a164bced808" in 5.52s (5.52s including waiting). Image size: 426356998 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79648c8fd6-9gxqf | Created | Created container prometheus-operator-admission-webhook |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79648c8fd6-9gxqf | Started | Started container prometheus-operator-admission-webhook |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 3 to 12 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 3 is the oldest |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79648c8fd6-swcgw | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:79ff0a2d212ea579cd1ad51d5db86ac45e0f907380bcf75a3c9d1a164bced808" in 4.295s (4.295s including waiting). Image size: 426356998 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-admission-webhook-79648c8fd6-swcgw | Started | Started container prometheus-operator-admission-webhook |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-67f7894794 to 0 from 1 |
| | openshift-image-registry | kubelet | image-registry-7db6746b67-4gv7k | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" in 6.026s (6.026s including waiting). Image size: 472728551 bytes. |
| | openshift-image-registry | kubelet | image-registry-7db6746b67-4gv7k | Created | Created container registry |
| | openshift-image-registry | kubelet | image-registry-7db6746b67-4gv7k | Started | Started container registry |
| | openshift-image-registry | kubelet | image-registry-7db6746b67-87llm | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" in 5.666s (5.666s including waiting). Image size: 472728551 bytes. |
| | openshift-image-registry | kubelet | image-registry-7db6746b67-87llm | Created | Created container registry |
| | openshift-image-registry | kubelet | image-registry-7db6746b67-87llm | Started | Started container registry |
| | openshift-image-registry | kubelet | node-ca-k7nd8 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" in 6.003s (6.003s including waiting). Image size: 472728551 bytes. |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6d6946f85d to 1 from 0 |
| | openshift-image-registry | kubelet | node-ca-k7nd8 | Started | Started container node-ca |
| | openshift-network-operator | network-operator | network-operator-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_6fc53577-39fe-4301-829e-fd39fca05a76 became leader |
| | openshift-network-operator | cluster-network-operator | network-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.222s (1.222s including waiting). Image size: 896974229 bytes. |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c287bd87edc928918bc0e8ff3d8a3be9c656bd0190d636773469cae19558ad69" already present on machine |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | Created | Created container machine-config-daemon |
| | openshift-image-registry | kubelet | node-ca-vffs9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" in 6.194s (6.194s including waiting). Image size: 472728551 bytes. |
| | openshift-image-registry | kubelet | node-ca-vffs9 | Created | Created container node-ca |
| | openshift-image-registry | kubelet | node-ca-vffs9 | Started | Started container node-ca |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." |
| | openshift-ingress | kubelet | router-default-bbcfc976b-4r8cp | Started | Started container router |
| | openshift-image-registry | kubelet | node-ca-wzchn | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:d14423996e5784ccf8ab0fdaf7e9aac6f561c03d4dad8bb3d99e1d1040872c21" in 5.247s (5.247s including waiting). Image size: 472728551 bytes. |
| | openshift-image-registry | kubelet | node-ca-k7nd8 | Created | Created container node-ca |
| | openshift-dns | kubelet | dns-default-hg5p9 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-ingress | kubelet | router-default-bbcfc976b-4r8cp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e5e5ffff50ae95ff88fb46c439b222e3c169f219d7b6a424a857e4d3c3b87db5" in 5.953s (5.953s including waiting). Image size: 476092374 bytes. |
| | openshift-ingress | kubelet | router-default-bbcfc976b-4r8cp | Created | Created container router |
| | openshift-dns | kubelet | dns-default-hg5p9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" in 6.166s (6.166s including waiting). Image size: 464555200 bytes. |
| | openshift-ingress | kubelet | router-default-bbcfc976b-xnpn7 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e5e5ffff50ae95ff88fb46c439b222e3c169f219d7b6a424a857e4d3c3b87db5" in 4.972s (4.972s including waiting). Image size: 476092374 bytes. |
| | openshift-ingress | kubelet | router-default-bbcfc976b-xnpn7 | Created | Created container router |
| | openshift-ingress | kubelet | router-default-bbcfc976b-xnpn7 | Started | Started container router |
| | openshift-apiserver | replicaset-controller | apiserver-67f7894794 | SuccessfulDelete | Deleted pod: apiserver-67f7894794-jl9rf |
| | openshift-dns | kubelet | dns-default-hg5p9 | Started | Started container dns |
| (x2) | openshift-machine-config-operator | kubelet | machine-config-daemon-69dkf | Started | Started container machine-config-daemon |
| | openshift-dns | kubelet | dns-default-jm7sd | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:88e1b6af0598d25afd13127b81578a48980897973df874082b03ff8b3c7fe155" in 4.711s (4.711s including waiting). Image size: 464555200 bytes. |
| | openshift-dns | kubelet | dns-default-hg5p9 | Created | Created container kube-rbac-proxy |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-j56tz |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-r5t4v |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-dns | kubelet | dns-default-hg5p9 | Started | Started container kube-rbac-proxy |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | ci-op-2fcpj5j6-f6035-2lklf-master-0 | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-image-registry | kubelet | node-ca-wzchn | Created | Created container node-ca |
| | openshift-image-registry | kubelet | node-ca-wzchn | Started | Started container node-ca |
| | openshift-multus | default-scheduler | cni-sysctl-allowlist-ds-r5t4v | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-r5t4v to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-multus | default-scheduler | cni-sysctl-allowlist-ds-bmvdq | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-bmvdq to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 8 triggered by "required configmap/config has changed" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-8bdbc6bbb-hgf9w pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-8bdbc6bbb-hgf9w pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-dns | kubelet | dns-default-jm7sd | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-jm7sd | Created | Created container kube-rbac-proxy |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-dns | kubelet | dns-default-fg6jx | Started | Started container kube-rbac-proxy |
| | openshift-dns | kubelet | dns-default-fg6jx | Created | Created container kube-rbac-proxy |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-8bdbc6bbb-hgf9w pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: route \"openshift-authentication/oauth-openshift\": status does not have a valid host address\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-8bdbc6bbb-hgf9w pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: no ingress for host oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX in route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-bmvdq |
| | openshift-multus | default-scheduler | cni-sysctl-allowlist-ds-j56tz | Scheduled | Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-j56tz to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-dns | kubelet | dns-default-jm7sd | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-dns | kubelet | dns-default-jm7sd | Started | Started container dns |
| | openshift-dns | kubelet | dns-default-jm7sd | Created | Created container dns |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nInstallerControllerDegraded: missing required resources: configmaps: cluster-policy-controller-config-9,config-9,controller-manager-kubeconfig-9,kube-controller-cert-syncer-kubeconfig-9,kube-controller-manager-pod-9,recycler-config-9,service-ca-9,serviceaccount-ca-9" to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-bmvdq | FailedMount | MountVolume.SetUp failed for volume "cni-sysctl-allowlist" : failed to sync configmap cache: timed out waiting for the condition |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-j56tz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it was missing |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-r5t4v | Created | Created container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-j56tz | Created | Created container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-j56tz | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-r5t4v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
| | | | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-8bdbc6bbb-hgf9w pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-monitoring | deployment-controller | prometheus-operator | ScalingReplicaSet | Scaled up replica set prometheus-operator-679f7fdbbc to 1 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-operator -n openshift-monitoring because it was missing |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-r5t4v | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationCreated | Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it was missing |
| | openshift-monitoring | multus | prometheus-operator-679f7fdbbc-4kz9l | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-bmvdq | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-bmvdq | Created | Created container kube-multus-additional-cni-plugins |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | kubelet | prometheus-operator-679f7fdbbc-4kz9l | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9eecddd60e53514154beae10a4edea87215f62baff61c37fc232c42d5592750f" |
| | openshift-ingress | service-controller | router-default | EnsuredLoadBalancer | Ensured load balancer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-etcd because it was missing |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-bmvdq | Started | Started container kube-multus-additional-cni-plugins |
| | openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-7b5fc84cb4 to 1 |
| | openshift-console-operator | replicaset-controller | console-operator-7b5fc84cb4 | SuccessfulCreate | Created pod: console-operator-7b5fc84cb4-6gw57 |
| | openshift-etcd | multus | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.47/23] from ovn-kubernetes |
| | openshift-monitoring | default-scheduler | prometheus-operator-679f7fdbbc-4kz9l | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-679f7fdbbc-4kz9l to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-console-operator | default-scheduler | console-operator-7b5fc84cb4-6gw57 | Scheduled | Successfully assigned openshift-console-operator/console-operator-7b5fc84cb4-6gw57 to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-monitoring | replicaset-controller | prometheus-operator-679f7fdbbc | SuccessfulCreate | Created pod: prometheus-operator-679f7fdbbc-4kz9l |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{string("https://10.0.0.3:2379"), string("https://10.0.0.4:2379"), string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), ...}, ...}, + "authConfig": map[string]any{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "imagePolicyConfig": map[string]any{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")}, ... // 2 identical entries } |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-authentication-operator | cluster-authentication-operator-metadata-controller-openshift-authentication-metadata | authentication-operator | ConfigMapCreated | Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing |
| | openshift-kube-controller-manager | multus | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-786d8fdc94 to 3 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-oauthserver-workloadworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-j56tz | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-authentication | replicaset-controller | oauth-openshift-786d8fdc94 | SuccessfulCreate | Created pod: oauth-openshift-786d8fdc94-dfr9r |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-8 -n openshift-kube-apiserver because it was missing |
| (x3) | openshift-console-operator | kubelet | console-operator-7b5fc84cb4-6gw57 | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : configmap references non-existent config key: ca-bundle.crt |
| | openshift-authentication | replicaset-controller | oauth-openshift-786d8fdc94 | SuccessfulCreate | Created pod: oauth-openshift-786d8fdc94-dz6zh |
| | openshift-authentication | multus | oauth-openshift-786d8fdc94-dfr9r | AddedInterface | Add eth0 [10.129.0.48/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829595-6kbmk | Started | Started container collect-profiles |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dfr9r | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" |
| | openshift-authentication | default-scheduler | oauth-openshift-786d8fdc94-dfr9r | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-786d8fdc94-dfr9r to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-authentication | default-scheduler | oauth-openshift-786d8fdc94-k6wq9 | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-786d8fdc94-k6wq9 to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-authentication | default-scheduler | oauth-openshift-786d8fdc94-dz6zh | Scheduled | Successfully assigned openshift-authentication/oauth-openshift-786d8fdc94-dz6zh to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-authentication | multus | oauth-openshift-786d8fdc94-k6wq9 | AddedInterface | Add eth0 [10.130.0.69/23] from ovn-kubernetes |
| | openshift-authentication | multus | oauth-openshift-786d8fdc94-dz6zh | AddedInterface | Add eth0 [10.128.0.48/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerWorkloadDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerWorkloadDegraded: \nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)",Progressing message changed from "" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-r5t4v | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-kube-controller-manager | kubelet | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": dial tcp 35.184.254.253:443: connect: connection refused\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829595-6kbmk | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" in 10.62s (10.62s including waiting). Image size: 841241863 bytes. |
| | openshift-authentication | replicaset-controller | oauth-openshift-786d8fdc94 | SuccessfulCreate | Created pod: oauth-openshift-786d8fdc94-k6wq9 |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829595-6kbmk | Created | Created container collect-profiles |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-k6wq9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" |
| | openshift-kube-controller-manager | kubelet | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-679f7fdbbc-4kz9l | Created | Created container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-679f7fdbbc-4kz9l | Started | Started container prometheus-operator |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-marketplace | kubelet | redhat-marketplace-xj76l | Killing | Stopping container registry-server |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-multus | kubelet | cni-sysctl-allowlist-ds-bmvdq | Killing | Stopping container kube-multus-additional-cni-plugins |
| | openshift-monitoring | kubelet | prometheus-operator-679f7fdbbc-4kz9l | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9eecddd60e53514154beae10a4edea87215f62baff61c37fc232c42d5592750f" in 2.217s (2.217s including waiting). Image size: 443267580 bytes. |
| | openshift-monitoring | kubelet | prometheus-operator-679f7fdbbc-4kz9l | Started | Started container kube-rbac-proxy |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dz6zh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" |
| | openshift-monitoring | kubelet | prometheus-operator-679f7fdbbc-4kz9l | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-679f7fdbbc-4kz9l | Created | Created container kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-8 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-s5p8x | Killing | Stopping container registry-server |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-operator-lifecycle-manager | package-server-manager-f7554d4b7-xd4h9_4cec1650-e89e-4f53-83b9-55aedb1f0339 | packageserver-controller-lock | LeaderElection | package-server-manager-f7554d4b7-xd4h9_4cec1650-e89e-4f53-83b9-55aedb1f0339 became leader |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | Uncordon | Update completed for config rendered-worker-59826e19ffd81ce395b52f6b2b19b336 and node has been uncordoned |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/openshift-state-metrics -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kube-state-metrics-custom-resource-state-configmap -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dz6zh | Created | Created container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-k6wq9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" in 3.055s (3.055s including waiting). Image size: 453141327 bytes. |
| | openshift-console-operator | kubelet | console-operator-7b5fc84cb4-6gw57 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9a4c80a4ff392dc7453ccf5a1eb1fbf3f5a66954e4aa4a03526f766abf9b49af" |
| | openshift-console-operator | multus | console-operator-7b5fc84cb4-6gw57 | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dz6zh | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" in 2.539s (2.539s including waiting). Image size: 453141327 bytes. |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:aggregated-metrics-reader because it was missing |
| | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-5487c6b79d to 1 |
| | openshift-monitoring | replicaset-controller | openshift-state-metrics-5487c6b79d | SuccessfulCreate | Created pod: openshift-state-metrics-5487c6b79d-v5ttw |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/node-exporter -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-monitoring | default-scheduler | kube-state-metrics-7b57c756c4-dw6rz | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-7b57c756c4-dw6rz to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
openshift-monitoring |
replicaset-controller |
kube-state-metrics-7b57c756c4 |
SuccessfulCreate |
Created pod: kube-state-metrics-7b57c756c4-dw6rz | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-10 -n openshift-kube-controller-manager because it was missing | |
openshift-monitoring |
cluster-monitoring-operator |
cluster-monitoring-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/alert-routing-edit because it was missing | |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/prometheus-k8s because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/cluster-monitoring-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/node-exporter because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/openshift-state-metrics because it was missing |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | NodeDone | Setting node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz, currentConfig rendered-worker-59826e19ffd81ce395b52f6b2b19b336 to Done |
| | openshift-monitoring | default-scheduler | openshift-state-metrics-5487c6b79d-v5ttw | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-5487c6b79d-v5ttw to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/kube-state-metrics because it was missing |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-59826e19ffd81ce395b52f6b2b19b336 |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dz6zh | Started | Started container oauth-openshift |
| | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-7b57c756c4 to 1 |
| | openshift-monitoring | multus | kube-state-metrics-7b57c756c4-dw6rz | AddedInterface | Add eth0 [10.131.0.12/23] from ovn-kubernetes |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-vbb87 |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" |
| | openshift-monitoring | default-scheduler | node-exporter-wpqcz | Scheduled | Successfully assigned openshift-monitoring/node-exporter-wpqcz to ci-op-2fcpj5j6-f6035-2lklf-master-1 |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-t7shb |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829595 | Completed | Job completed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/pod-metrics-reader because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:metrics-server because it was missing |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-28829595, condition: Complete |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:8304f31722cebcf73cfd437ba6acf9a1e8e36d10a908000e910d01d1b923fa5c" |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c8e07acbed793097aa2efe10ff9260eee2251280cedcd503aa97a377f7ebcfb7" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodCreated | Created Pod/kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Created | Created container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Created | Created container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | multus | openshift-state-metrics-5487c6b79d-v5ttw | AddedInterface | Add eth0 [10.128.2.9/23] from ovn-kubernetes |
| | openshift-monitoring | default-scheduler | node-exporter-wc5hp | Scheduled | Successfully assigned openshift-monitoring/node-exporter-wc5hp to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-l7mr9 |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-wc5hp |
| (x2) | openshift-monitoring | controllermanager | alertmanager-main | NoPods | No matching pods found |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-7lhkb |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.163.188:443/healthz\": dial tcp 172.30.163.188:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-k6wq9 | Started | Started container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-k6wq9 | Created | Created container oauth-openshift |
| | openshift-monitoring | kubelet | node-exporter-vbb87 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" |
| | openshift-monitoring | default-scheduler | node-exporter-vbb87 | Scheduled | Successfully assigned openshift-monitoring/node-exporter-vbb87 to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-wpqcz |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dfr9r | Started | Started container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dfr9r | Created | Created container oauth-openshift |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-8 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dfr9r | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" in 3.194s (3.194s including waiting). Image size: 453141327 bytes. |
| | openshift-monitoring | default-scheduler | node-exporter-7lhkb | Scheduled | Successfully assigned openshift-monitoring/node-exporter-7lhkb to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-monitoring | kubelet | node-exporter-7lhkb | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" |
| | openshift-monitoring | default-scheduler | node-exporter-l7mr9 | Scheduled | Successfully assigned openshift-monitoring/node-exporter-l7mr9 to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-monitoring | kubelet | node-exporter-l7mr9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" |
| | openshift-monitoring | default-scheduler | node-exporter-t7shb | Scheduled | Successfully assigned openshift-monitoring/node-exporter-t7shb to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-7fbb585d7c to 1 from 0 |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-786d8fdc94 to 2 from 3 |
| | openshift-authentication | replicaset-controller | oauth-openshift-7fbb585d7c | SuccessfulCreate | Created pod: oauth-openshift-7fbb585d7c-2g9nh |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-786d8fdc94-dfr9r pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-authentication | replicaset-controller | oauth-openshift-786d8fdc94 | SuccessfulDelete | Deleted pod: oauth-openshift-786d8fdc94-dfr9r |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container guard |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container guard |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | multus | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/user-workload-monitoring-config-edit -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-edit because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/metrics-server -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreateFailed | Failed to create RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring: roles.rbac.authorization.k8s.io "monitoring-alertmanager-edit" not found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/alertmanager-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/metrics-server-auth-reader -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/alertmanager-main -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-kube-rbac-proxy-web -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/thanos-querier because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/monitoring-rules-view because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/alertmanager-main because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/kube-rbac-proxy -n openshift-monitoring because it was missing |
| | openshift-console-operator | kubelet | console-operator-7b5fc84cb4-6gw57 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:9a4c80a4ff392dc7453ccf5a1eb1fbf3f5a66954e4aa4a03526f766abf9b49af" in 3.392s (3.392s including waiting). Image size: 482385085 bytes. |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-view -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-7lhkb | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" in 1.678s (1.678s including waiting). Image size: 390679664 bytes. |
| | openshift-monitoring | kubelet | node-exporter-7lhkb | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-7lhkb | Started | Started container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-7lhkb | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-vbb87 | Started | Started container init-textfile |
| | openshift-monitoring | default-scheduler | alertmanager-main-0 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-0 to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-8 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.2.10/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-reader -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-api-writer -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/monitoring-alertmanager-edit -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" in 1.586s (1.586s including waiting). Image size: 390679664 bytes. |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/cluster-monitoring-metrics-api -n openshift-monitoring because it was missing |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dfr9r | Killing | Stopping container oauth-openshift |
| | openshift-monitoring | kubelet | node-exporter-vbb87 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" in 1.751s (1.751s including waiting). Image size: 390679664 bytes. |
| | openshift-monitoring | kubelet | node-exporter-vbb87 | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" in 1.956s (1.956s including waiting). Image size: 390679664 bytes. |
| | openshift-monitoring | kubelet | node-exporter-l7mr9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" in 1.709s (1.709s including waiting). Image size: 390679664 bytes. |
| | openshift-monitoring | kubelet | node-exporter-l7mr9 | Created | Created container init-textfile |
| | openshift-monitoring | kubelet | node-exporter-l7mr9 | Started | Started container init-textfile |
| | openshift-monitoring | default-scheduler | alertmanager-main-1 | Scheduled | Successfully assigned openshift-monitoring/alertmanager-main-1 to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | multus | alertmanager-main-1 | AddedInterface | Add eth0 [10.129.2.11/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Started | Started container init-textfile |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-console-operator | kubelet | console-operator-7b5fc84cb4-6gw57 | Created | Created container console-operator |
openshift-monitoring |
kubelet |
node-exporter-l7mr9 |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-l7mr9 |
Created |
Created container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-l7mr9 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-l7mr9 |
Started |
Started container node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-vbb87 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-vbb87 |
Created |
Created container node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-l7mr9 |
Created |
Created container node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-vbb87 |
Started |
Started container node-exporter | |
openshift-monitoring |
kubelet |
node-exporter-vbb87 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-l7mr9 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" already present on machine | |
openshift-monitoring |
kubelet |
node-exporter-7lhkb |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-7lhkb |
Created |
Created container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
node-exporter-7lhkb |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-console-operator |
console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller |
console-operator |
PodDisruptionBudgetCreated |
Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing | |
| | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentCreated | Created Deployment.apps/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| | openshift-machine-api | machineapioperator | machine-api-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator | console-operator | FeatureGatesInitialized | FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} |
| | openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-7b5fc84cb4-6gw57_7fa9f439-fdad-43ae-98df-0539f760e71f became leader |
| | openshift-console-operator | kubelet | console-operator-7b5fc84cb4-6gw57 | Started | Started container console-operator |
| | openshift-monitoring | kubelet | node-exporter-7lhkb | Started | Started container node-exporter |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsDisabled | Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (4 endpoints, 3 zones), addressType: IPv4 |
| | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest |
| | openshift-monitoring | kubelet | node-exporter-7lhkb | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" already present on machine |
| | openshift-multus | replicaset-controller | multus-admission-controller-84d87497df | SuccessfulCreate | Created pod: multus-admission-controller-84d87497df-hgz5c |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-84d87497df to 1 |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Created | Created container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Created | Created container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Started | Started container kube-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Created | Created container kube-state-metrics |
| | openshift-monitoring | kubelet | kube-state-metrics-7b57c756c4-dw6rz | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:8304f31722cebcf73cfd437ba6acf9a1e8e36d10a908000e910d01d1b923fa5c" in 2.995s (2.995s including waiting). Image size: 420339512 bytes. |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-wpqcz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Created | Created container node-exporter |
| | openshift-multus | default-scheduler | multus-admission-controller-84d87497df-hgz5c | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-84d87497df-hgz5c to ci-op-2fcpj5j6-f6035-2lklf-master-2 |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-t7shb | Started | Started container kube-rbac-proxy |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from Unknown to False ("All is well"),Progressing changed from Unknown to False ("All is well"),Upgradeable changed from Unknown to True ("All is well"),status.relatedObjects changed from [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/thanos-querier-grpc-tls-cipqs6ec23e6s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/metrics-server-audit-profiles -n openshift-monitoring because it was missing |
| | openshift-monitoring | replicaset-controller | thanos-querier-546ff6759f | SuccessfulCreate | Created pod: thanos-querier-546ff6759f-bgh62 |
| | openshift-monitoring | replicaset-controller | thanos-querier-546ff6759f | SuccessfulCreate | Created pod: thanos-querier-546ff6759f-58hxn |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-8 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" in 2.345s (2.345s including waiting). Image size: 419738919 bytes. |
| (x2) | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | openshift-multus | multus | multus-admission-controller-84d87497df-hgz5c | AddedInterface | Add eth0 [10.128.0.50/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-worker-59826e19ffd81ce395b52f6b2b19b336 |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | NodeDone | Setting node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2, currentConfig rendered-worker-59826e19ffd81ce395b52f6b2b19b336 to Done |
| | openshift-machine-config-operator | machineconfigdaemon | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | Uncordon | Update completed for config rendered-worker-59826e19ffd81ce395b52f6b2b19b336 and node has been uncordoned |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c8e07acbed793097aa2efe10ff9260eee2251280cedcd503aa97a377f7ebcfb7" in 3.852s (3.852s including waiting). Image size: 411763411 bytes. |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-hgz5c | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" |
| | openshift-monitoring | kubelet | node-exporter-vbb87 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | node-exporter-vbb87 | Started | Started container kube-rbac-proxy |
| | openshift-console | replicaset-controller | downloads-6957cb85f9 | SuccessfulCreate | Created pod: downloads-6957cb85f9-5bf4w |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" in 2.408s (2.408s including waiting). Image size: 390679664 bytes. |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/grpc-tls -n openshift-monitoring because it was missing |
| | openshift-console-operator | console-operator-oauthclient-secret-controller-oauthclientsecretcontroller | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceAccountCreated | Created ServiceAccount/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | default-scheduler | thanos-querier-546ff6759f-bgh62 | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-546ff6759f-bgh62 to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-console-operator | console-operator-console-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/console -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),EvaluationConditionsDetected set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] |
| (x2) | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsDisabled | Unable to allocate minimum required endpoints to each zone without exceeding overload threshold (5 endpoints, 3 zones), addressType: IPv4 |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorVersionChanged | clusteroperator/console version "operator" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Created | Created container openshift-state-metrics |
| | openshift-monitoring | kubelet | openshift-state-metrics-5487c6b79d-v5ttw | Started | Started container openshift-state-metrics |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Created | Created container init-textfile |
| (x2) | openshift-console | controllermanager | console | NoPods | No matching pods found |
| | openshift-console | default-scheduler | downloads-6957cb85f9-5bf4w | Scheduled | Successfully assigned openshift-console/downloads-6957cb85f9-5bf4w to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-console | multus | downloads-6957cb85f9-5bf4w | AddedInterface | Add eth0 [10.131.0.13/23] from ovn-kubernetes |
| | openshift-console | default-scheduler | downloads-6957cb85f9-qq2b2 | Scheduled | Successfully assigned openshift-console/downloads-6957cb85f9-qq2b2 to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-console | replicaset-controller | downloads-6957cb85f9 | SuccessfulCreate | Created pod: downloads-6957cb85f9-qq2b2 |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Started | Started container init-textfile |
| | openshift-console | multus | downloads-6957cb85f9-qq2b2 | AddedInterface | Add eth0 [10.128.2.11/23] from ovn-kubernetes |
| | openshift-monitoring | default-scheduler | thanos-querier-546ff6759f-58hxn | Scheduled | Successfully assigned openshift-monitoring/thanos-querier-546ff6759f-58hxn to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3b2f91a4214d4a84c837d43bc8fec289635792dcaf0bfba7246e4f905e8d9af" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Created | Created container node-exporter |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-10 -n openshift-kube-controller-manager because it was missing |
| (x2) | openshift-console | controllermanager | downloads | NoPods | No matching pods found |
| | openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-6957cb85f9 to 2 |
| | openshift-monitoring | kubelet | node-exporter-wc5hp | Started | Started container node-exporter |
| | openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-546ff6759f to 2 |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-7fbc5f4744 | SuccessfulCreate | Created pod: monitoring-plugin-7fbc5f4744-phx4h |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c793dd5bee91ac200f1286c5b1347506bf9b069890c3f206a2cf3fb9228f525c" |
| | openshift-monitoring | multus | thanos-querier-546ff6759f-58hxn | AddedInterface | Add eth0 [10.128.2.12/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:043ca523dadf0cb36a10aad24f14834201493f0c07dacb58f450fad7e6ba1f50" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container init-config-reloader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-786d8fdc94-dfr9r pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c793dd5bee91ac200f1286c5b1347506bf9b069890c3f206a2cf3fb9228f525c" |
| | openshift-monitoring | multus | thanos-querier-546ff6759f-bgh62 | AddedInterface | Add eth0 [10.131.0.14/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container init-config-reloader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-786d8fdc94-dfr9r pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-786d8fdc94-dfr9r pod)" |
| | openshift-monitoring | deployment-controller | monitoring-plugin | ScalingReplicaSet | Scaled up replica set monitoring-plugin-7fbc5f4744 to 2 |
| | openshift-monitoring | replicaset-controller | monitoring-plugin-7fbc5f4744 | SuccessfulCreate | Created pod: monitoring-plugin-7fbc5f4744-kz226 |
| | openshift-monitoring | default-scheduler | monitoring-plugin-7fbc5f4744-kz226 | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-7fbc5f4744-kz226 to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing |
| | openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing |
| | openshift-monitoring | multus | monitoring-plugin-7fbc5f4744-phx4h | AddedInterface | Add eth0 [10.131.0.15/23] from ovn-kubernetes |
| | openshift-monitoring | default-scheduler | monitoring-plugin-7fbc5f4744-phx4h | Scheduled | Successfully assigned openshift-monitoring/monitoring-plugin-7fbc5f4744-phx4h to ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
| | openshift-monitoring | multus | monitoring-plugin-7fbc5f4744-kz226 | AddedInterface | Add eth0 [10.128.2.13/23] from ovn-kubernetes |
| | openshift-monitoring | replicaset-controller | metrics-server-58f6d575c4 | SuccessfulCreate | Created pod: metrics-server-58f6d575c4-qj8vk |
| | openshift-monitoring | default-scheduler | metrics-server-58f6d575c4-k7fwq | Scheduled | Successfully assigned openshift-monitoring/metrics-server-58f6d575c4-k7fwq to ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-786d8fdc94-dfr9r pod)" to "All is well",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-3g41mr2412eu -n openshift-monitoring because it was missing |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-hgz5c | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" in 2.402s (2.402s including waiting). Image size: 436172339 bytes. |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-10 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | replicaset-controller | metrics-server-58f6d575c4 | SuccessfulCreate | Created pod: metrics-server-58f6d575c4-k7fwq |
| | openshift-monitoring | default-scheduler | metrics-server-58f6d575c4-qj8vk | Scheduled | Successfully assigned openshift-monitoring/metrics-server-58f6d575c4-qj8vk to ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-58f6d575c4 to 2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-8 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container alertmanager |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-hgz5c | Created | Created container multus-admission-controller |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-qj8vk | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" |
| | openshift-monitoring | multus | metrics-server-58f6d575c4-qj8vk | AddedInterface | Add eth0 [10.129.2.12/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" |
| | openshift-monitoring | multus | metrics-server-58f6d575c4-k7fwq | AddedInterface | Add eth0 [10.128.2.14/23] from ovn-kubernetes |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-hgz5c | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-hgz5c | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container kube-rbac-proxy |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Created |
Created container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Created |
Created container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-1 |
Started |
Started container alertmanager | |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-hgz5c | Started | Started container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-5zx7k | Killing | Stopping container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:043ca523dadf0cb36a10aad24f14834201493f0c07dacb58f450fad7e6ba1f50" in 2.45s (2.45s including waiting). Image size: 449422523 bytes. |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-5zx7k | Killing | Stopping container multus-admission-controller |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-84d87497df to 2 from 1 |
| | openshift-multus | replicaset-controller | multus-admission-controller-749bf6f86d | SuccessfulDelete | Deleted pod: multus-admission-controller-749bf6f86d-5zx7k |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | PodUpdated | Updated Pod/kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager because it changed |
| | openshift-multus | replicaset-controller | multus-admission-controller-84d87497df | SuccessfulCreate | Created pod: multus-admission-controller-84d87497df-2ktnx |
| | openshift-multus | default-scheduler | multus-admission-controller-84d87497df-2ktnx | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-84d87497df-2ktnx to ci-op-2fcpj5j6-f6035-2lklf-master-0 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" in 4.815s (4.815s including waiting). Image size: 419738919 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container init-config-reloader |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-749bf6f86d to 1 from 2 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-hgz5c | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-2ktnx | Created | Created container kube-rbac-proxy |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-2ktnx | Created | Created container multus-admission-controller |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-8 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-67866594b6 | SuccessfulCreate | Created pod: route-controller-manager-67866594b6-phrd7 |
| | openshift-controller-manager | replicaset-controller | controller-manager-795448867c | SuccessfulCreate | Created pod: controller-manager-795448867c-2ht6p |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-5f544c54d7 to 2 from 3 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-795448867c to 1 from 0 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-d8db88b9d to 2 from 3 |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-2ktnx | Started | Started container multus-admission-controller |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-2ktnx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-metric |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-2ktnx | Started | Started container kube-rbac-proxy |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-67866594b6 to 1 from 0 |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-5f544c54d7-4lmsc stopped leading |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:043ca523dadf0cb36a10aad24f14834201493f0c07dacb58f450fad7e6ba1f50" |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-2pwz8 | Killing | Stopping container route-controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config:   map[string]any{   "build": map[string]any{"buildDefaults": map[string]any{"resources": map[string]any{}}, "imageTemplateFormat": map[string]any{"format": string("registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c"...)}},   "controllers": []any{string("openshift.io/build"), string("openshift.io/build-config-change"), string("openshift.io/builder-rolebindings"), string("openshift.io/builder-serviceaccount"), ...},   "deployer": map[string]any{"imageTemplateFormat": map[string]any{"format": string("registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a"...)}}, + "dockerPullSecret": map[string]any{ + "internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000"), + },   "featureGates": []any{string("BuildCSIVolumes=true")},   "ingress": map[string]any{"ingressIPNetworkCIDR": string("")},   } |
| | openshift-multus | multus | multus-admission-controller-84d87497df-2ktnx | AddedInterface | Add eth0 [10.129.0.49/23] from ovn-kubernetes |
| | openshift-controller-manager | replicaset-controller | controller-manager-5f544c54d7 | SuccessfulDelete | Deleted pod: controller-manager-5f544c54d7-4lmsc |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container kube-rbac-proxy-metric |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 10 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-route-controller-manager | default-scheduler | route-controller-manager-67866594b6-phrd7 | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml |
| | openshift-multus | kubelet | multus-admission-controller-84d87497df-2ktnx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:cf530d75ebe9448c4a2bd7adfbd3a013ea5d8beba787d1916fe4cd502199d660" already present on machine |
| | openshift-controller-manager | default-scheduler | controller-manager-795448867c-2ht6p | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-d8db88b9d | SuccessfulDelete | Deleted pod: route-controller-manager-d8db88b9d-2pwz8 |
| | openshift-controller-manager | kubelet | controller-manager-5f544c54d7-4lmsc | Killing | Stopping container controller-manager |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | static-pod-installer | installer-6-retry-2-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| (x2) | openshift-authentication | default-scheduler | oauth-openshift-7fbb585d7c-2g9nh | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| (x5) | openshift-apiserver | default-scheduler | apiserver-6d6946f85d-wdq7x | FailedScheduling | 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/6 nodes are available: 3 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules. |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-client | etcd-operator | MemberRemove | removed member with ID: 11857714448295288924 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-8 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentUpdated | Updated Deployment.apps/downloads -n openshift-console because it changed |
| | openshift-multus | replicaset-controller | multus-admission-controller-749bf6f86d | SuccessfulDelete | Deleted pod: multus-admission-controller-749bf6f86d-f9cds |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Killing | Stopping container kube-rbac-proxy |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-749bf6f86d to 0 from 1 |
| | openshift-multus | kubelet | multus-admission-controller-749bf6f86d-f9cds | Killing | Stopping container multus-admission-controller |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" in 2.104s (2.104s including waiting). Image size: 392884958 bytes. |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-qj8vk | Started | Started container metrics-server |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 10",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 10" |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-qj8vk | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" in 2.394s (2.394s including waiting). Image size: 451216469 bytes. |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-qj8vk | Created | Created container metrics-server |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c793dd5bee91ac200f1286c5b1347506bf9b069890c3f206a2cf3fb9228f525c" in 6.365s (6.365s including waiting). Image size: 506290318 bytes. |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3." to "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Started | Started container thanos-query |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-8 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Created | Created container thanos-query |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-8 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" |
| | openshift-kube-controller-manager | kubelet | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container installer |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Created | Created container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Created | Created container kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-8 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveConsoleURL | assetPublicURL changed from to https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX |
| | openshift-console | replicaset-controller | console-5ff7f7597d | SuccessfulCreate | Created pod: console-5ff7f7597d-w7z9h |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5ff7f7597d to 2 |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing |
| | openshift-console | replicaset-controller | console-5ff7f7597d | SuccessfulCreate | Created pod: console-5ff7f7597d-qc5rb |
| (x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-8 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | multus | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-8 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Created | Created container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" in 7.444s (7.444s including waiting). Image size: 392884958 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Started | Started container kube-rbac-proxy-metrics |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: refixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.621111 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.621191 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: F1024 13:16:24.639340 1 cmd.go:105] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: refixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.621111 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.621191 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: F1024 13:16:24.639340 1 cmd.go:105] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-8 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: refixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.621111 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.621191 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: F1024 13:16:24.639340 1 cmd.go:105] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: refixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.621111 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.621191 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: F1024 13:16:24.639340 1 cmd.go:105] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-bgh62 | Created | Created container kube-rbac-proxy-metrics |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-5f544c54d7-vlsx8 became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 9 triggered by "required configmap/config has changed" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 8 triggered by "required configmap/config has changed" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-controller-manager | multus | controller-manager-795448867c-2ht6p | AddedInterface | Add eth0 [10.129.0.50/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-5ff7f7597d-w7z9h | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_df2a5041-1d0b-4f8a-812e-55e88930d89c became leader |
| | openshift-route-controller-manager | multus | route-controller-manager-67866594b6-phrd7 | AddedInterface | Add eth0 [10.129.0.51/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: refixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.621111 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.621191 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: F1024 13:16:24.639340 1 cmd.go:105] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: refixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.621111 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.621191 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: F1024 13:16:24.639340 1 cmd.go:105] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
| | openshift-console | multus | console-5ff7f7597d-qc5rb | AddedInterface | Add eth0 [10.130.0.70/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-2ht6p | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" already present on machine |
| | openshift-console | kubelet | console-5ff7f7597d-qc5rb | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-r5t4v | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-j56tz | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_e4568c4f-1067-4329-8a4f-44845fe4cacd became leader |
| | openshift-console | multus | console-5ff7f7597d-w7z9h | AddedInterface | Add eth0 [10.128.0.52/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-phrd7 | Created | Created container route-controller-manager |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-7fbb585d7c to 0 from 1 |
| | openshift-controller-manager | kubelet | controller-manager-5f544c54d7-7l2qd | Killing | Stopping container controller-manager |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-5f544c54d7 to 1 from 2 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'",Available changed from Unknown to False ("RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'") |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| | openshift-controller-manager | replicaset-controller | controller-manager-5f544c54d7 | SuccessfulDelete | Deleted pod: controller-manager-5f544c54d7-7l2qd |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-9 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-795448867c to 2 from 1 |
| | openshift-authentication | replicaset-controller | oauth-openshift-7fbb585d7c | SuccessfulDelete | Deleted pod: oauth-openshift-7fbb585d7c-2g9nh |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-phrd7 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine |
| | openshift-kube-apiserver | static-pod-installer | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-controller-manager | replicaset-controller | controller-manager-795448867c | SuccessfulCreate | Created pod: controller-manager-795448867c-z7dl4 |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-phrd7 | Started | Started container route-controller-manager |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-67d88f768b to 1 from 0 |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-67866594b6-phrd7_54cebae7-0b99-4f2e-a57b-111372315dde became leader |
| | openshift-authentication | replicaset-controller | oauth-openshift-67d88f768b | SuccessfulCreate | Created pod: oauth-openshift-67d88f768b-dqtrl |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-bmvdq | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-2ht6p | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-2ht6p | Created | Created container controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-rcc6x | Killing | Stopping container route-controller-manager |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-d8db88b9d | SuccessfulDelete | Deleted pod: route-controller-manager-d8db88b9d-rcc6x |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-67866594b6 | SuccessfulCreate | Created pod: route-controller-manager-67866594b6-2zw6c |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"25714\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 13, 13, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000664e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"25714\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 13, 13, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000664e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-d8db88b9d to 1 from 2 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-67866594b6 to 2 from 1 |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Started | Started container thanos-query |
| | openshift-controller-manager | multus | controller-manager-795448867c-z7dl4 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" in 18.773s (18.773s including waiting). Image size: 451216469 bytes. |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Created | Created container thanos-query |
| | openshift-etcd | static-pod-installer | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 12 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c793dd5bee91ac200f1286c5b1347506bf9b069890c3f206a2cf3fb9228f525c" in 21.082s (21.082s including waiting). Image size: 506290318 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:043ca523dadf0cb36a10aad24f14834201493f0c07dacb58f450fad7e6ba1f50" in 18.012s (18.012s including waiting). Image size: 449422523 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-9 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container etcdctl |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container alertmanager |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container etcd-readyz |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Started |
Started container config-reloader | |
openshift-monitoring |
kubelet |
alertmanager-main-0 |
Created |
Created container config-reloader | |
openshift-monitoring |
kubelet |
thanos-querier-546ff6759f-58hxn |
Started |
Started container kube-rbac-proxy | |
openshift-monitoring |
kubelet |
thanos-querier-546ff6759f-58hxn |
Created |
Created container kube-rbac-proxy-web | |
openshift-monitoring |
kubelet |
thanos-querier-546ff6759f-58hxn |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" | |
openshift-monitoring |
kubelet |
thanos-querier-546ff6759f-58hxn |
Started |
Started container kube-rbac-proxy-web | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-5b488fc55 to 2 | |
openshift-console |
kubelet |
console-5ff7f7597d-w7z9h |
Created |
Created container console | |
openshift-console |
replicaset-controller |
console-5b488fc55 |
SuccessfulCreate |
Created pod: console-5b488fc55-tkp84 | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5f544c54d7 |
SuccessfulDelete |
Deleted pod: controller-manager-5f544c54d7-vlsx8 | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-795448867c to 3 from 2 | |
openshift-route-controller-manager |
multus |
route-controller-manager-67866594b6-2zw6c |
AddedInterface |
Add eth0 [10.130.0.71/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-5ff7f7597d-qc5rb |
Created |
Created container console | |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3" |
| | openshift-console | replicaset-controller | console-5b488fc55 | SuccessfulCreate | Created pod: console-5b488fc55-q5rrh |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-5ff7f7597d to 1 from 2 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-z7dl4 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" already present on machine |
| | openshift-console | replicaset-controller | console-5ff7f7597d | SuccessfulDelete | Deleted pod: console-5ff7f7597d-qc5rb |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-z7dl4 | Created | Created container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-z7dl4 | Started | Started container controller-manager |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-9 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | console-5ff7f7597d-w7z9h | Started | Started container console |
| | openshift-console | kubelet | console-5ff7f7597d-qc5rb | Started | Started container console |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-5f544c54d7 to 0 from 1 |
| | openshift-controller-manager | kubelet | controller-manager-5f544c54d7-vlsx8 | Killing | Stopping container controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-795448867c | SuccessfulCreate | Created pod: controller-manager-795448867c-wt2cs |
| | openshift-console | kubelet | console-5ff7f7597d-qc5rb | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" in 4.467s (4.467s including waiting). Image size: 642298227 bytes. |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-5f544c54d7-vlsx8 stopped leading |
| | openshift-console | kubelet | console-5ff7f7597d-w7z9h | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" in 4.834s (4.834s including waiting). Image size: 642298227 bytes. |
| | openshift-authentication | multus | oauth-openshift-67d88f768b-dqtrl | AddedInterface | Add eth0 [10.129.0.52/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-5b488fc55-q5rrh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" |
| | openshift-console | multus | console-5b488fc55-q5rrh | AddedInterface | Add eth0 [10.129.0.53/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-wt2cs | Created | Created container controller-manager |
| | openshift-console | kubelet | console-5ff7f7597d-qc5rb | Killing | Stopping container console |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected"),Available message changed from "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" in 2.55s (2.55s including waiting). Image size: 392884958 bytes. |
| | openshift-controller-manager | multus | controller-manager-795448867c-wt2cs | AddedInterface | Add eth0 [10.130.0.72/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 8",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 8" |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-wt2cs | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-wt2cs | Started | Started container controller-manager |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" in 3.103s (3.103s including waiting). Image size: 392884958 bytes. |
| | openshift-kube-apiserver | multus | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.54/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Created | Created container kube-rbac-proxy-rules |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-9 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Started | Started container kube-rbac-proxy-rules |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Started | Started container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Created | Created container kube-rbac-proxy-metrics |
| | openshift-monitoring | kubelet | thanos-querier-546ff6759f-58hxn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/thanos-querier-pdb -n openshift-monitoring because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing |
| | openshift-console | kubelet | console-5b488fc55-q5rrh | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" in 5.181s (5.181s including waiting). Image size: 642298227 bytes. |
| | openshift-console | kubelet | console-5b488fc55-q5rrh | Created | Created container console |
| | openshift-console | kubelet | console-5b488fc55-q5rrh | Started | Started container console |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.73/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-kube-apiserver | multus | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-8-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-9 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3" to "Progressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3" |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | ProbeError | Readiness probe error: Get "https://10.128.2.14:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Unhealthy | Readiness probe failed: Get "https://10.128.2.14:10250/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-apiserver | multus | apiserver-6d6946f85d-wdq7x | AddedInterface | Add eth0 [10.128.0.55/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Created | Created container fix-audit-permissions |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.55/23] from ovn-kubernetes |
| (x2) | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Created | Created container openshift-apiserver |
| (x2) | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Started | Started container openshift-apiserver |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-9 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Started | Started container openshift-apiserver-check-endpoints |
| (x2) | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Created | Created container openshift-apiserver-check-endpoints |
| (x2) | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-67d88f768b-dqtrl | Unhealthy | Readiness probe failed: Get "https://10.129.0.52:6443/healthz": read tcp 10.129.0.2:57572->10.129.0.52:6443: read: connection reset by peer |
| | openshift-authentication | kubelet | oauth-openshift-67d88f768b-dqtrl | ProbeError | Readiness probe error: Get "https://10.129.0.52:6443/healthz": read tcp 10.129.0.2:57572->10.129.0.52:6443: read: connection reset by peer body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-9 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 9 triggered by "required configmap/config has changed" |
| | openshift-console | kubelet | console-5b488fc55-tkp84 | Created | Created container console |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-console | multus | console-5b488fc55-tkp84 | AddedInterface | Add eth0 [10.130.0.74/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-5b488fc55-tkp84 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-ensure-env-vars |
| | openshift-console | kubelet | console-5b488fc55-tkp84 | Started | Started container console |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | ProbeError | Readiness probe error: Get "https://10.128.2.14:10250/readyz": read tcp 10.128.2.2:54988->10.128.2.14:10250: read: connection reset by peer body: |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-resources-copy |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Unhealthy | Readiness probe failed: Get "https://10.128.2.14:10250/readyz": read tcp 10.128.2.2:54988->10.128.2.14:10250: read: connection reset by peer |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Killing | Container metrics-server failed liveness probe, will be restarted |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcdctl |
| (x5) | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | BackOff | Back-off restarting failed container openshift-apiserver in pod apiserver-6d6946f85d-wdq7x_openshift-apiserver(6b2f5ca1-0cde-492f-816b-bf23796e59e7) |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-2zw6c | Created | Created container route-controller-manager |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-2zw6c | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-2zw6c | Started | Started container route-controller-manager |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcdctl |
| (x4) | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | BackOff | Back-off restarting failed container openshift-apiserver-check-endpoints in pod apiserver-6d6946f85d-wdq7x_openshift-apiserver(6b2f5ca1-0cde-492f-816b-bf23796e59e7) |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-readyz |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | multus | revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.56/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-795448867c-2ht6p became leader |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver | multus | revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.75/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 8" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 9",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 8" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 9" |
| | openshift-kube-apiserver | kubelet | installer-8-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container installer |
openshift-monitoring |
kubelet |
metrics-server-58f6d575c4-k7fwq |
ProbeError |
Readiness probe error: Get "https://10.128.2.14:10250/readyz": dial tcp 10.128.2.14:10250: connect: connection refused body: | |
openshift-monitoring |
kubelet |
metrics-server-58f6d575c4-k7fwq |
Unhealthy |
Readiness probe failed: Get "https://10.128.2.14:10250/readyz": dial tcp 10.128.2.14:10250: connect: connection refused | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.56/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-9-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container pruner | |
openshift-console |
kubelet |
downloads-6957cb85f9-5bf4w |
Failed |
Error: ErrImagePull | |
openshift-console |
kubelet |
downloads-6957cb85f9-5bf4w |
Failed |
Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08a5b52d3b627c4ac4dbc720eee3f41ca43af342202c3c5c53a3bc1b203c585b": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
| | | | | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: refixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I1024 13:15:40.605843 1 cmd.go:409] Getting controller reference for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.620880 1 cmd.go:422] Waiting for installer revisions to settle for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:15:40.621111 1 envvar.go:172] \"Feature gate default state\" feature=\"WatchListClient\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.621191 1 envvar.go:172] \"Feature gate default state\" feature=\"InformerResourceVersion\" enabled=false\nNodeInstallerDegraded: I1024 13:15:40.629965 1 cmd.go:514] Waiting additional period after revisions have settled for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: I1024 13:16:10.632243 1 cmd.go:520] Getting installer pods for node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: F1024 13:16:24.639340 1 cmd.go:105] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 6",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 6") |
| | openshift-kube-controller-manager | static-pod-installer | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | StaticPodInstallerCompleted | Successfully installed revision 10 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 0 to 6 because static pod is ready |
| | openshift-console | kubelet | downloads-6957cb85f9-5bf4w | Failed | Error: ImagePullBackOff |
| | openshift-console | kubelet | downloads-6957cb85f9-5bf4w | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08a5b52d3b627c4ac4dbc720eee3f41ca43af342202c3c5c53a3bc1b203c585b" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 0 to 6 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 static pod not found |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-phx4h | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:595046ed82273419149bd1ee64552a786cf93d945976f7d40a86fdeb31d3dff8": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| (x3) | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-phx4h | Failed | Error: ErrImagePull |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-phx4h | Failed | Error: ImagePullBackOff |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-phx4h | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:595046ed82273419149bd1ee64552a786cf93d945976f7d40a86fdeb31d3dff8" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | multus | installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.76/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-6-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| (x21) | openshift-ingress-operator | kubelet | ingress-operator-6b9fd98fb4-hksdp | BackOff | Back-off restarting failed container ingress-operator in pod ingress-operator-6b9fd98fb4-hksdp_openshift-ingress-operator(f964e6d9-4212-41f1-bbe9-f747b005e8e2) |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" already present on machine |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-kz226 | Failed | Error: ErrImagePull |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Failed | Error: ImagePullBackOff |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Failed | Error: ErrImagePull |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08a5b52d3b627c4ac4dbc720eee3f41ca43af342202c3c5c53a3bc1b203c585b" |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-kz226 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:595046ed82273419149bd1ee64552a786cf93d945976f7d40a86fdeb31d3dff8": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| (x2) | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Started | Started container metrics-server |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Startup probe error: Get "https://10.0.0.3:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Created | Created container metrics-server |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-kz226 | BackOff | Back-off pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:595046ed82273419149bd1ee64552a786cf93d945976f7d40a86fdeb31d3dff8" |
| (x9) | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-2zw6c | Unhealthy | Readiness probe failed: Get "https://10.130.0.71:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-kz226 | Failed | Error: ImagePullBackOff |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Failed | Failed to pull image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08a5b52d3b627c4ac4dbc720eee3f41ca43af342202c3c5c53a3bc1b203c585b": rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Unhealthy | Readiness probe failed: Get "https://10.128.2.14:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | ProbeError | Readiness probe error: Get "https://10.128.2.14:10250/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: |
| (x4) | openshift-authentication | kubelet | oauth-openshift-67d88f768b-dqtrl | Created | Created container oauth-openshift |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| (x4) | openshift-authentication | kubelet | oauth-openshift-67d88f768b-dqtrl | Started | Started container oauth-openshift |
| (x4) | openshift-authentication | kubelet | oauth-openshift-67d88f768b-dqtrl | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.57/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-9-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| (x2) | openshift-console | kubelet | downloads-6957cb85f9-5bf4w | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08a5b52d3b627c4ac4dbc720eee3f41ca43af342202c3c5c53a3bc1b203c585b" |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-console | kubelet | downloads-6957cb85f9-5bf4w | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08a5b52d3b627c4ac4dbc720eee3f41ca43af342202c3c5c53a3bc1b203c585b" in 1.023s (1.023s including waiting). Image size: 2876420716 bytes. |
| | openshift-console | kubelet | downloads-6957cb85f9-5bf4w | Created | Created container download-server |
| | openshift-console | kubelet | downloads-6957cb85f9-5bf4w | Started | Started container download-server |
| (x2) | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-phx4h | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:595046ed82273419149bd1ee64552a786cf93d945976f7d40a86fdeb31d3dff8" |
| (x2) | openshift-console | kubelet | downloads-6957cb85f9-5bf4w | ProbeError | Readiness probe error: Get "http://10.131.0.13:8080/": dial tcp 10.131.0.13:8080: connect: connection refused body: |
| (x2) | openshift-console | kubelet | downloads-6957cb85f9-5bf4w | Unhealthy | Readiness probe failed: Get "http://10.131.0.13:8080/": dial tcp 10.131.0.13:8080: connect: connection refused |
| (x10) | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-2zw6c | ProbeError | Readiness probe error: Get "https://10.130.0.71:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x4) | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Unhealthy | Liveness probe failed: Get "https://10.128.2.14:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-phx4h | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:595046ed82273419149bd1ee64552a786cf93d945976f7d40a86fdeb31d3dff8" in 2.192s (2.192s including waiting). Image size: 2566441596 bytes. |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-phx4h | Started | Started container monitoring-plugin |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-phx4h | Created | Created container monitoring-plugin |
| (x2) | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08a5b52d3b627c4ac4dbc720eee3f41ca43af342202c3c5c53a3bc1b203c585b" |
| (x2) | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-kz226 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:595046ed82273419149bd1ee64552a786cf93d945976f7d40a86fdeb31d3dff8" |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-kz226 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:595046ed82273419149bd1ee64552a786cf93d945976f7d40a86fdeb31d3dff8" in 971ms (971ms including waiting). Image size: 2566441596 bytes. |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Created | Created container download-server |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-kz226 | Created | Created container monitoring-plugin |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:08a5b52d3b627c4ac4dbc720eee3f41ca43af342202c3c5c53a3bc1b203c585b" in 971ms (971ms including waiting). Image size: 2876420716 bytes. |
| | openshift-monitoring | kubelet | monitoring-plugin-7fbc5f4744-kz226 | Started | Started container monitoring-plugin |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Started | Started container download-server |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/monitoring-plugin -n openshift-monitoring because it was missing |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Unhealthy | Liveness probe failed: Get "http://10.128.2.11:8080/": dial tcp 10.128.2.11:8080: connect: connection refused |
| | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | ProbeError | Liveness probe error: Get "http://10.128.2.11:8080/": dial tcp 10.128.2.11:8080: connect: connection refused body: |
| (x3) | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | Unhealthy | Readiness probe failed: Get "http://10.128.2.11:8080/": dial tcp 10.128.2.11:8080: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| (x3) | openshift-console | kubelet | downloads-6957cb85f9-qq2b2 | ProbeError | Readiness probe error: Get "http://10.128.2.11:8080/": dial tcp 10.128.2.11:8080: connect: connection refused body: |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigControllerFailed | Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-machine-config-operator/configmaps/kubeconfig-data": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x5) | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | ProbeError | Liveness probe error: Get "https://10.128.2.14:10250/livez": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x8) | openshift-machine-config-operator | kubelet | kube-rbac-proxy-crio-ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | ProbeError | Liveness probe error: Get "https://10.128.0.26:8443/healthz": read tcp 10.128.0.2:55850->10.128.0.26:8443: read: connection reset by peer body: |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | Unhealthy | Liveness probe failed: Get "https://10.128.0.26:8443/healthz": read tcp 10.128.0.2:55850->10.128.0.26:8443: read: connection reset by peer |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | ProbeError | Readiness probe error: Get "https://10.128.0.26:8443/healthz": read tcp 10.128.0.2:55866->10.128.0.26:8443: read: connection reset by peer body: |
| | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | Unhealthy | Readiness probe failed: Get "https://10.128.0.26:8443/healthz": read tcp 10.128.0.2:55866->10.128.0.26:8443: read: connection reset by peer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| (x4) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | BackOff | Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0_openshift-kube-controller-manager(43cdf124672df45443fab70d56ac4de9) |
| (x8) | openshift-ingress-canary | kubelet | ingress-canary-9wwh9 | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found |
| (x8) | openshift-ingress-canary | kubelet | ingress-canary-hhkt7 | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container cluster-policy-controller |
| (x8) | openshift-ingress-canary | kubelet | ingress-canary-lmjwh | FailedMount | MountVolume.SetUp failed for volume "cert" : secret "canary-serving-cert" not found |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ProbeError | Startup probe error: Get "https://10.0.0.6:10357/healthz": read tcp 10.0.0.6:52000->10.0.0.6:10357: read: connection reset by peer body: |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Unhealthy | Startup probe failed: Get "https://10.0.0.6:10357/healthz": read tcp 10.0.0.6:52000->10.0.0.6:10357: read: connection reset by peer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Container cluster-policy-controller failed startup probe, will be restarted |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | BackOff | Back-off restarting failed container route-controller-manager in pod route-controller-manager-d8db88b9d-58sj4_openshift-route-controller-manager(6dbbc689-f884-447e-85e8-4961e64db944) |
| (x11) | openshift-authentication | kubelet | oauth-openshift-67d88f768b-dqtrl | BackOff | Back-off restarting failed container oauth-openshift in pod oauth-openshift-67d88f768b-dqtrl_openshift-authentication(cd6f5fb6-f5f9-4cda-8467-24f29c5e5a1f) |
| (x10) | openshift-console | kubelet | console-5ff7f7597d-w7z9h | Unhealthy | Startup probe failed: Get "https://10.128.0.52:8443/health": dial tcp 10.128.0.52:8443: connect: connection refused |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Unhealthy | Startup probe failed: Get "https://10.0.0.6:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | Started | Started container route-controller-manager |
| (x3) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | Created | Created container route-controller-manager |
| (x10) | openshift-console | kubelet | console-5b488fc55-q5rrh | Unhealthy | Startup probe failed: Get "https://10.129.0.53:8443/health": dial tcp 10.129.0.53:8443: connect: connection refused |
| (x6) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | ProbeError | Readiness probe error: Get "https://10.128.0.26:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x6) | openshift-route-controller-manager | kubelet | route-controller-manager-d8db88b9d-58sj4 | Unhealthy | Readiness probe failed: Get "https://10.128.0.26:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| (x11) | openshift-console | kubelet | console-5ff7f7597d-w7z9h | ProbeError | Startup probe error: Get "https://10.128.0.52:8443/health": dial tcp 10.128.0.52:8443: connect: connection refused body: |
| (x4) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ProbeError | Startup probe error: Get "https://10.0.0.6:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| (x13) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| (x11) | openshift-console | kubelet | console-5b488fc55-q5rrh | ProbeError | Startup probe error: Get "https://10.129.0.53:8443/health": dial tcp 10.129.0.53:8443: connect: connection refused body: |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Unhealthy |
Startup probe failed: HTTP probe failed with statuscode: 500 | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
KubeAPIReadyz |
readyz=true | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
ProbeError |
Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check 
failed | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_1ae50c01-dc0e-4abb-bb5c-51000d036055 became leader | |
openshift-ingress-operator |
cluster-ingress-operator |
ingress-operator |
FeatureGatesInitialized |
FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AWSEFSDriverVolumeMetrics", "AdminNetworkPolicy", "AlibabaPlatform", "AzureWorkloadIdentity", "BareMetalLoadBalancer", "BuildCSIVolumes", "ChunkSizeMiB", "CloudDualStackNodeIPs", "DisableKubeletCloudCredentialProviders", "GCPLabelsTags", "HardwareSpeed", "IngressControllerLBSubnetsAWS", "KMSv1", "ManagedBootImages", "MetricsServer", "MultiArchInstallAWS", "MultiArchInstallGCP", "NetworkDiagnosticsConfig", "NetworkLiveMigration", "NodeDisruptionPolicy", "OpenShiftPodSecurityAdmission", "PrivateHostedZoneAWS", "SetEIPForNLBIngressController", "VSphereControlPlaneMachineSet", "VSphereDriverConfiguration", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}, Disabled:[]v1.FeatureGateName{"AWSClusterHostedDNS", "AdditionalRoutingCapabilities", "AutomatedEtcdBackup", "BootcNodeManagement", "CSIDriverSharedResource", "ClusterAPIInstall", "ClusterAPIInstallIBMCloud", "ClusterMonitoringConfig", "DNSNameResolver", "DynamicResourceAllocation", "EtcdBackendQuota", "EventedPLEG", "Example", "ExternalOIDC", "GCPClusterHostedDNS", "GatewayAPI", "ImageStreamImportMode", "IngressControllerDynamicConfigurationManager", "InsightsConfig", "InsightsConfigAPI", "InsightsOnDemandDataGather", "InsightsRuntimeExtractor", "MachineAPIMigration", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MachineConfigNodes", "ManagedBootImagesAWS", "MaxUnavailableStatefulSet", "MetricsCollectionProfiles", "MixedCPUsAllocation", "MultiArchInstallAzure", "NetworkSegmentation", "NewOLM", "NodeSwap", "OVNObservability", "OnClusterBuild", "PersistentIPsForVirtualization", "PinnedImages", "PlatformOperators", "ProcMountType", "RouteAdvertisements", "RouteExternalCertificate", "ServiceAccountTokenNodeBinding", "SignatureStores", "SigstoreImageVerification", "TranslateStreamCloseWebsocketRequests", "UpgradeStatus", "UserNamespacesPodSecurityStandards", "UserNamespacesSupport", "VSphereMultiNetworks", "VSphereMultiVCenters", "VolumeGroupSnapshot"}} | |
| | openshift-network-node-identity | ci-op-2fcpj5j6-f6035-2lklf-master-0_a16c2283-0c51-4be9-8bba-fa0d07e8aadf | ovnkube-identity | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_a16c2283-0c51-4be9-8bba-fa0d07e8aadf became leader |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_66b15279-5150-49e5-91cd-6bd0b71abf57 became leader |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz in Controller |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 in Controller |
| (x11) | openshift-console | kubelet | console-5b488fc55-tkp84 | ProbeError | Startup probe error: Get "https://10.130.0.74:8443/health": dial tcp 10.130.0.74:8443: connect: connection refused body: |
| (x11) | openshift-console | kubelet | console-5b488fc55-tkp84 | Unhealthy | Startup probe failed: Get "https://10.130.0.74:8443/health": dial tcp 10.130.0.74:8443: connect: connection refused |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"25714\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 13, 13, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000664e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-67866594b6 | SuccessfulCreate | Created pod: route-controller-manager-67866594b6-m5fxg |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-d8db88b9d to 0 from 1 |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-67866594b6 to 3 from 2 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-d8db88b9d | SuccessfulDelete | Deleted pod: route-controller-manager-d8db88b9d-58sj4 |
| (x5) | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | Created | Created container marketplace-operator |
| (x4) | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:b2ea969540674dde10fe1aaddf9a7608b26256f5f939a55455c44523ca0a73e4" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-m5fxg | Created | Created container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-m5fxg | Started | Started container route-controller-manager |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-m5fxg | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine |
| | openshift-route-controller-manager | multus | route-controller-manager-67866594b6-m5fxg | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from False to True ("RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded message changed from "GCPPDCSIDriverOperatorCRDegraded: All is well" to "GCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"csidriver.yaml\" (string): Get \"https://172.30.0.1:443/apis/storage.k8s.io/v1/csidrivers/pd.csi.storage.gke.io\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"controller_sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-csi-drivers/serviceaccounts/gcp-pd-csi-driver-controller-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"controller_pdb.yaml\" (string): Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-csi-drivers/poddisruptionbudgets/gcp-pd-csi-driver-controller-pdb\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"node_sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-csi-drivers/serviceaccounts/gcp-pd-csi-driver-node-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"service.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-csi-drivers/services/gcp-pd-csi-driver-controller-metrics\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/main_attacher_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-csi-main-attacher-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/privileged_role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/gcp-pd-privileged-role\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/controller_privileged_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-controller-privileged-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/node_privileged_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-node-privileged-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/main_provisioner_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-csi-main-provisioner-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/volumesnapshot_reader_provisioner_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-volumesnapshot-reader-provisioner-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/main_resizer_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-csi-main-resizer-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/storageclass_reader_resizer_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-storageclass-reader-resizer-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/main_snapshotter_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-csi-main-snapshotter-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: " |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-5ff7f7597d to 0 from 1 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-5b488fc55 to 1 from 2 |
| | openshift-console | replicaset-controller | console-7ccb568577 | SuccessfulCreate | Created pod: console-7ccb568577-lmz2w |
| | openshift-console | replicaset-controller | console-7ccb568577 | SuccessfulCreate | Created pod: console-7ccb568577-xpft2 |
| | openshift-console | replicaset-controller | console-5ff7f7597d | SuccessfulDelete | Deleted pod: console-5ff7f7597d-w7z9h |
| | openshift-console | replicaset-controller | console-5b488fc55 | SuccessfulDelete | Deleted pod: console-5b488fc55-tkp84 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-7ccb568577 to 2 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | installer errors: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:23:53.148987 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:03.148942 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:13.149702 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:23.148978 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:33.149148 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:33.150178 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F1024 13:24:33.150207 1 cmd.go:105] timed out waiting for the condition |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 0 to 10 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 10" to "NodeInstallerProgressing: 1 node is at revision 3; 1 node is at revision 4; 1 node is at revision 10",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 3; 1 node is at revision 4; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 3; 1 node is at revision 4; 1 node is at revision 10" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 3 to 10 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 3 is the oldest |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 3 to 12 because static pod is ready |
| (x5) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | installer errors: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | multus | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.58/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-67d88f768b to 2 from 1 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-786d8fdc94 to 1 from 2 |
| | openshift-authentication | kubelet | oauth-openshift-786d8fdc94-dz6zh | Killing | Stopping container oauth-openshift |
| | openshift-authentication | replicaset-controller | oauth-openshift-786d8fdc94 | SuccessfulDelete | Deleted pod: oauth-openshift-786d8fdc94-dz6zh |
| | openshift-authentication | replicaset-controller | oauth-openshift-67d88f768b | SuccessfulCreate | Created pod: oauth-openshift-67d88f768b-zbp7v |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"25714\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 13, 13, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000664e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "CustomRouteControllerDegraded: an error on the server (\"Internal Server Error: \\\"/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift\\\": Post \\\"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\\\": dial tcp 172.30.0.1:443: connect: connection refused\") has prevented the request from succeeding (get routes.route.openshift.io oauth-openshift)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"25714\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 13, 13, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000664e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:53.148987 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:03.148942 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:13.149702 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:23.148978 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:33.149148 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:33.150178 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:33.150207 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 5 to 12 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 with revision 5 is the oldest |
| | openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CreatedSCCRanges | created SCC ranges for openshift-network-console namespace |
| | openshift-console | replicaset-controller | console-54d86f69c8 | SuccessfulCreate | Created pod: console-54d86f69c8-zdmgs |
| | openshift-network-console | deployment-controller | networking-console-plugin | ScalingReplicaSet | Scaled up replica set networking-console-plugin-5cd86b96f5 to 2 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "CustomRouteControllerDegraded: an error on the server (\"Internal Server Error: \\\"/apis/route.openshift.io/v1/namespaces/openshift-authentication/routes/oauth-openshift\\\": Post \\\"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\\\": dial tcp 172.30.0.1:443: connect: connection refused\") has prevented the request from succeeding (get routes.route.openshift.io oauth-openshift)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"25714\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 13, 13, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000664e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"25714\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 13, 13, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000664e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-network-console | replicaset-controller | networking-console-plugin-5cd86b96f5 | SuccessfulCreate | Created pod: networking-console-plugin-5cd86b96f5-4mr48 |
| | openshift-console | replicaset-controller | console-5b488fc55 | SuccessfulDelete | Deleted pod: console-5b488fc55-q5rrh |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-7ccb568577 to 1 from 2 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-54d86f69c8 to 2 |
| | openshift-console | replicaset-controller | console-7ccb568577 | SuccessfulDelete | Deleted pod: console-7ccb568577-lmz2w |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-5b488fc55 to 0 from 1 |
| | openshift-network-console | replicaset-controller | networking-console-plugin-5cd86b96f5 | SuccessfulCreate | Created pod: networking-console-plugin-5cd86b96f5-dh6vw |
| | openshift-console | replicaset-controller | console-54d86f69c8 | SuccessfulCreate | Created pod: console-54d86f69c8-k5dnq |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-dh6vw | FailedMount | MountVolume.SetUp failed for volume "networking-console-plugin-cert" : secret "networking-console-plugin-cert" not found |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, 0 replicas available" to "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected",status.relatedObjects changed from [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] to [{"console.openshift.io" "consoleplugins" "" "monitoring-plugin"} {"console.openshift.io" "consoleplugins" "" "networking-console-plugin"} {"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}] |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"25714\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 13, 13, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc000664e28), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" |
| (x2) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, 0 replicas available" |
| | openshift-apiserver | replicaset-controller | apiserver-6d6946f85d | SuccessfulCreate | Created pod: apiserver-6d6946f85d-8v797 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6d6946f85d to 2 from 1 |
| | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-5d5579f445 to 0 from 1 |
| | openshift-apiserver | replicaset-controller | apiserver-5d5579f445 | SuccessfulDelete | Deleted pod: apiserver-5d5579f445-5twj5 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-9-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-dh6vw | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-5cd86b96f5-dh6vw_openshift-network-console_6c63032a-fda9-4676-bf46-e9f2c4c0ed34_0(984c0a3a3f3fcee0a7ebef0d628b39d0febccf12513f90911d355cc9de1db69f): error adding pod openshift-network-console_networking-console-plugin-5cd86b96f5-dh6vw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"984c0a3a3f3fcee0a7ebef0d628b39d0febccf12513f90911d355cc9de1db69f" Netns:"/var/run/netns/021ef39d-5791-4de2-ad6e-5d6eaf51c416" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-5cd86b96f5-dh6vw;K8S_POD_INFRA_CONTAINER_ID=984c0a3a3f3fcee0a7ebef0d628b39d0febccf12513f90911d355cc9de1db69f;K8S_POD_UID=6c63032a-fda9-4676-bf46-e9f2c4c0ed34" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw] networking: Multus: [openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw/6c63032a-fda9-4676-bf46-e9f2c4c0ed34]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod networking-console-plugin-5cd86b96f5-dh6vw in out of cluster comm: pod "networking-console-plugin-5cd86b96f5-dh6vw" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | multus | installer-9-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.59/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-9-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-9-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-9-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-console | kubelet | console-7ccb568577-xpft2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine |
| | openshift-console | multus | console-7ccb568577-xpft2 | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-etcd | multus | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.77/23] from ovn-kubernetes |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-dh6vw | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-5cd86b96f5-dh6vw_openshift-network-console_6c63032a-fda9-4676-bf46-e9f2c4c0ed34_0(e195998c1367d9c40a7fae70fbf06aedd6c4568bc6866a26629033c0f717fd7f): error adding pod openshift-network-console_networking-console-plugin-5cd86b96f5-dh6vw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e195998c1367d9c40a7fae70fbf06aedd6c4568bc6866a26629033c0f717fd7f" Netns:"/var/run/netns/999333a9-821d-46d1-8f5f-55fe97de1250" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-5cd86b96f5-dh6vw;K8S_POD_INFRA_CONTAINER_ID=e195998c1367d9c40a7fae70fbf06aedd6c4568bc6866a26629033c0f717fd7f;K8S_POD_UID=6c63032a-fda9-4676-bf46-e9f2c4c0ed34" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw] networking: Multus: [openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw/6c63032a-fda9-4676-bf46-e9f2c4c0ed34]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod networking-console-plugin-5cd86b96f5-dh6vw in out of cluster comm: pod "networking-console-plugin-5cd86b96f5-dh6vw" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-console | kubelet | console-7ccb568577-xpft2 | Created | Created container console |
| | openshift-console | kubelet | console-7ccb568577-xpft2 | Started | Started container console |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-4mr48 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-5cd86b96f5-4mr48_openshift-network-console_12d67959-0d4f-4beb-b36c-78f41974f8f4_0(60c469e03723e5f2a2cb7c52414ecee7ae4f367c9611bc69840735a05b88d78d): error adding pod openshift-network-console_networking-console-plugin-5cd86b96f5-4mr48 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"60c469e03723e5f2a2cb7c52414ecee7ae4f367c9611bc69840735a05b88d78d" Netns:"/var/run/netns/87cc5855-115b-416e-8a3a-2a4c9cf53460" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-5cd86b96f5-4mr48;K8S_POD_INFRA_CONTAINER_ID=60c469e03723e5f2a2cb7c52414ecee7ae4f367c9611bc69840735a05b88d78d;K8S_POD_UID=12d67959-0d4f-4beb-b36c-78f41974f8f4" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48] networking: Multus: [openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48/12d67959-0d4f-4beb-b36c-78f41974f8f4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod networking-console-plugin-5cd86b96f5-4mr48 in out of cluster comm: pod "networking-console-plugin-5cd86b96f5-4mr48" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-scheduler | multus | installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.78/23] from ovn-kubernetes |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-console | kubelet | console-54d86f69c8-zdmgs | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine |
| | openshift-console | multus | console-54d86f69c8-zdmgs | AddedInterface | Add eth0 [10.130.0.79/23] from ovn-kubernetes |
| | openshift-marketplace | multus | certified-operators-9fj4p | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-console | kubelet | console-54d86f69c8-zdmgs | Created | Created container console |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-4mr48 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-5cd86b96f5-4mr48_openshift-network-console_12d67959-0d4f-4beb-b36c-78f41974f8f4_0(aded3cbf1e4ebd5c114140606b18e311d8e72818b2cf46098f34692533027669): error adding pod openshift-network-console_networking-console-plugin-5cd86b96f5-4mr48 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"aded3cbf1e4ebd5c114140606b18e311d8e72818b2cf46098f34692533027669" Netns:"/var/run/netns/35edcc16-50e0-48fe-81bc-380fd84750a2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-5cd86b96f5-4mr48;K8S_POD_INFRA_CONTAINER_ID=aded3cbf1e4ebd5c114140606b18e311d8e72818b2cf46098f34692533027669;K8S_POD_UID=12d67959-0d4f-4beb-b36c-78f41974f8f4" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48] networking: Multus: [openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48/12d67959-0d4f-4beb-b36c-78f41974f8f4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod networking-console-plugin-5cd86b96f5-4mr48 in out of cluster comm: pod "networking-console-plugin-5cd86b96f5-4mr48" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-console | kubelet | console-54d86f69c8-zdmgs | Started | Started container console |
| | openshift-marketplace | multus | community-operators-qbjs8 | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 1.653s (1.653s including waiting). Image size: 955380483 bytes. |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Created | Created container registry-server |
| | openshift-ingress-canary | multus | ingress-canary-9wwh9 | AddedInterface | Add eth0 [10.131.0.8/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Started | Started container registry-server |
| | openshift-ingress-canary | kubelet | ingress-canary-9wwh9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" |
| | openshift-ingress-canary | multus | ingress-canary-hhkt7 | AddedInterface | Add eth0 [10.129.2.10/23] from ovn-kubernetes |
| | openshift-ingress-canary | kubelet | ingress-canary-hhkt7 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" |
| | openshift-ingress-canary | multus | ingress-canary-lmjwh | AddedInterface | Add eth0 [10.128.2.8/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-9fj4p | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 943ms (943ms including waiting). Image size: 896974229 bytes. |
| | openshift-ingress-canary | kubelet | ingress-canary-lmjwh | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 3.757s (3.757s including waiting). Image size: 1110454519 bytes. |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-ingress-canary | kubelet | ingress-canary-lmjwh | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" in 2.885s (2.885s including waiting). Image size: 484196808 bytes. |
| | openshift-ingress-canary | kubelet | ingress-canary-lmjwh | Created | Created container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-lmjwh | Started | Started container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-hhkt7 | Started | Started container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-hhkt7 | Created | Created container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-hhkt7 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" in 2.811s (2.811s including waiting). Image size: 484196808 bytes. |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 906ms (906ms including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-qbjs8 | Created | Created container registry-server |
| | openshift-ingress-canary | kubelet | ingress-canary-9wwh9 | Started | Started container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-9wwh9 | Created | Created container serve-healthcheck-canary |
| | openshift-ingress-canary | kubelet | ingress-canary-9wwh9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:36b50eb42ae863d2543c25649cad208b4caf51bca14ba1fb75b36e7c278b61e0" in 3.066s (3.066s including waiting). Image size: 484196808 bytes. |
openshift-cluster-storage-operator |
cluster-storage-operator-status-controller-statussyncer_storage |
cluster-storage-operator |
OperatorStatusChanged |
Status for clusteroperator/storage changed: Degraded message changed from "GCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"csidriver.yaml\" (string): Get \"https://172.30.0.1:443/apis/storage.k8s.io/v1/csidrivers/pd.csi.storage.gke.io\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"controller_sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-csi-drivers/serviceaccounts/gcp-pd-csi-driver-controller-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"controller_pdb.yaml\" (string): Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-csi-drivers/poddisruptionbudgets/gcp-pd-csi-driver-controller-pdb\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"node_sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-csi-drivers/serviceaccounts/gcp-pd-csi-driver-node-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"service.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-csi-drivers/services/gcp-pd-csi-driver-controller-metrics\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/main_attacher_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-csi-main-attacher-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/privileged_role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/gcp-pd-privileged-role\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/controller_privileged_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-controller-privileged-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/node_privileged_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-node-privileged-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/main_provisioner_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-csi-main-provisioner-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/volumesnapshot_reader_provisioner_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-volumesnapshot-reader-provisioner-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/main_resizer_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-csi-main-resizer-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/storageclass_reader_resizer_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-storageclass-reader-resizer-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: \"rbac/main_snapshotter_binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/gcp-pd-csi-main-snapshotter-binding\": dial tcp 172.30.0.1:443: connect: connection refused\nGCPPDCSIDriverOperatorCRDegraded: GCPPDDriverStaticResourcesControllerDegraded: " to "GCPPDCSIDriverOperatorCRDegraded: All is well" | |
openshift-authentication |
kubelet |
oauth-openshift-67d88f768b-zbp7v |
Started |
Started container oauth-openshift | |
openshift-authentication |
multus |
oauth-openshift-67d88f768b-zbp7v |
AddedInterface |
Add eth0 [10.128.0.61/23] from ovn-kubernetes | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-authentication |
kubelet |
oauth-openshift-67d88f768b-zbp7v |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" already present on machine | |
openshift-authentication |
kubelet |
oauth-openshift-67d88f768b-zbp7v |
Created |
Created container oauth-openshift | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-67d88f768b to 3 from 2 | |
openshift-authentication |
kubelet |
oauth-openshift-786d8fdc94-k6wq9 |
Killing |
Stopping container oauth-openshift | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-786d8fdc94 to 0 from 1 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-67d88f768b |
SuccessfulCreate |
Created pod: oauth-openshift-67d88f768b-wblgk | |
openshift-authentication |
replicaset-controller |
oauth-openshift-786d8fdc94 |
SuccessfulDelete |
Deleted pod: oauth-openshift-786d8fdc94-k6wq9 | |
openshift-marketplace |
kubelet |
certified-operators-9fj4p |
Killing |
Stopping container registry-server | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_c0e1c13e-68ca-48e2-81e0-4e314fa9f2fe stopped leading | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_66b15279-5150-49e5-91cd-6bd0b71abf57 stopped leading | |
openshift-network-console |
kubelet |
networking-console-plugin-5cd86b96f5-dh6vw |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-5cd86b96f5-dh6vw_openshift-network-console_6c63032a-fda9-4676-bf46-e9f2c4c0ed34_0(f6ab86961c61e674a0138c6755f3f1073086a1d0aced024585d9f110bd4a77dd): error adding pod openshift-network-console_networking-console-plugin-5cd86b96f5-dh6vw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f6ab86961c61e674a0138c6755f3f1073086a1d0aced024585d9f110bd4a77dd" Netns:"/var/run/netns/abef234f-4a7f-4def-90dd-3a8c2200b2c2" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-5cd86b96f5-dh6vw;K8S_POD_INFRA_CONTAINER_ID=f6ab86961c61e674a0138c6755f3f1073086a1d0aced024585d9f110bd4a77dd;K8S_POD_UID=6c63032a-fda9-4676-bf46-e9f2c4c0ed34" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw] networking: Multus: [openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw/6c63032a-fda9-4676-bf46-e9f2c4c0ed34]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod networking-console-plugin-5cd86b96f5-dh6vw in out of cluster comm: pod "networking-console-plugin-5cd86b96f5-dh6vw" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-kube-controller-manager |
static-pod-installer |
installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 10 | |
openshift-console |
multus |
console-54d86f69c8-k5dnq |
AddedInterface |
Add eth0 [10.129.0.60/23] from ovn-kubernetes | |
openshift-network-console |
kubelet |
networking-console-plugin-5cd86b96f5-4mr48 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-5cd86b96f5-4mr48_openshift-network-console_12d67959-0d4f-4beb-b36c-78f41974f8f4_0(ce7951abe2e535d1f4aad0248ed86a0bba165be9b5647e429570349c937dfc25): error adding pod openshift-network-console_networking-console-plugin-5cd86b96f5-4mr48 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ce7951abe2e535d1f4aad0248ed86a0bba165be9b5647e429570349c937dfc25" Netns:"/var/run/netns/458e163e-b367-469d-a7b7-cf9cc3557b3b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-5cd86b96f5-4mr48;K8S_POD_INFRA_CONTAINER_ID=ce7951abe2e535d1f4aad0248ed86a0bba165be9b5647e429570349c937dfc25;K8S_POD_UID=12d67959-0d4f-4beb-b36c-78f41974f8f4" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48] networking: Multus: [openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48/12d67959-0d4f-4beb-b36c-78f41974f8f4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod networking-console-plugin-5cd86b96f5-4mr48 in out of cluster comm: pod "networking-console-plugin-5cd86b96f5-4mr48" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-console |
kubelet |
console-54d86f69c8-k5dnq |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" | |
openshift-console |
kubelet |
console-54d86f69c8-k5dnq |
Created |
Created container console | |
openshift-console |
kubelet |
console-54d86f69c8-k5dnq |
Started |
Started container console | |
openshift-marketplace |
kubelet |
community-operators-qbjs8 |
Killing |
Stopping container registry-server | |
| (x13) | openshift-kube-controller-manager |
kubelet |
kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Unhealthy |
Readiness probe failed: Get "https://10.0.0.3:10257/healthz": dial tcp 10.0.0.3:10257: connect: connection refused |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Killing |
Stopping container etcd-rev | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine | |
openshift-etcd |
static-pod-installer |
installer-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 12 | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-controller-manager-cert-syncer | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Killing |
Stopping container etcd-readyz | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container kube-controller-manager-recovery-controller | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Killing |
Stopping container etcdctl | |
openshift-kube-controller-manager |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_b3bfd7af-d496-407b-9e34-961e3e3111bf became leader | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container wait-for-host-port | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_54c19673-10f4-43c7-b52b-ae4675b9f2b7 became leader | |
openshift-kube-scheduler |
static-pod-installer |
installer-6-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 6 | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container wait-for-host-port | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-network-console |
kubelet |
networking-console-plugin-5cd86b96f5-dh6vw |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-5cd86b96f5-dh6vw_openshift-network-console_6c63032a-fda9-4676-bf46-e9f2c4c0ed34_0(d561da67b9e22d829649b73b1cbf15d0b029f7d385e015469d1667dfc192b0df): error adding pod openshift-network-console_networking-console-plugin-5cd86b96f5-dh6vw to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d561da67b9e22d829649b73b1cbf15d0b029f7d385e015469d1667dfc192b0df" Netns:"/var/run/netns/7b8a4104-8ab8-4f82-b4b6-29387b9a5910" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-5cd86b96f5-dh6vw;K8S_POD_INFRA_CONTAINER_ID=d561da67b9e22d829649b73b1cbf15d0b029f7d385e015469d1667dfc192b0df;K8S_POD_UID=6c63032a-fda9-4676-bf46-e9f2c4c0ed34" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw] networking: Multus: [openshift-network-console/networking-console-plugin-5cd86b96f5-dh6vw/6c63032a-fda9-4676-bf46-e9f2c4c0ed34]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod networking-console-plugin-5cd86b96f5-dh6vw in out of cluster comm: pod "networking-console-plugin-5cd86b96f5-dh6vw" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-network-console |
kubelet |
networking-console-plugin-5cd86b96f5-4mr48 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_networking-console-plugin-5cd86b96f5-4mr48_openshift-network-console_12d67959-0d4f-4beb-b36c-78f41974f8f4_0(5aef6f2dedc873fb7148671ad05d618b8b9edc0e0f0bbf75c5c0354346979c2e): error adding pod openshift-network-console_networking-console-plugin-5cd86b96f5-4mr48 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5aef6f2dedc873fb7148671ad05d618b8b9edc0e0f0bbf75c5c0354346979c2e" Netns:"/var/run/netns/aa4b3072-fa82-4e6c-8e4e-51368e33ed32" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-network-console;K8S_POD_NAME=networking-console-plugin-5cd86b96f5-4mr48;K8S_POD_INFRA_CONTAINER_ID=5aef6f2dedc873fb7148671ad05d618b8b9edc0e0f0bbf75c5c0354346979c2e;K8S_POD_UID=12d67959-0d4f-4beb-b36c-78f41974f8f4" Path:"" ERRORED: error configuring pod [openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48] networking: Multus: [openshift-network-console/networking-console-plugin-5cd86b96f5-4mr48/12d67959-0d4f-4beb-b36c-78f41974f8f4]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod networking-console-plugin-5cd86b96f5-4mr48 in out of cluster comm: pod "networking-console-plugin-5cd86b96f5-4mr48" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-2_af90799b-e9f8-45ff-a1cf-b957a2b40e0f became leader | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
StartingNewRevision |
new revision 11 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
openshift-kube-apiserver |
static-pod-installer |
installer-9-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 9 | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
StartingNewRevision |
new revision 7 triggered by "required secret/localhost-recovery-client-token has changed" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "",Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.0.3:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-10 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 10 triggered by "required secret/localhost-recovery-client-token has changed" | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-1 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 in Controller | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-0 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz in Controller | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-master-2 |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller | |
openshift-dns |
endpoint-slice-controller |
dns-default |
TopologyAwareHintsEnabled |
Topology Aware Hints has been enabled, addressType: IPv4 | |
default |
node-controller |
ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x |
RegisteredNode |
Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-pod-7 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-manager-pod-11 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config-11 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container guard | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container guard | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
multus |
openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
AddedInterface |
Add eth0 [10.130.0.80/23] from ovn-kubernetes | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config-7 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/cluster-policy-controller-config-11 -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-11 -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
openshift-authentication |
multus |
oauth-openshift-67d88f768b-wblgk |
AddedInterface |
Add eth0 [10.130.0.81/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-10 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-11 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-7 -n openshift-kube-scheduler because it was missing | |
openshift-network-console |
kubelet |
networking-console-plugin-5cd86b96f5-dh6vw |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5922faf77c005dda67080ab24a6f4c91177a822c378520fe7738138fc9dce3a8" | |
| (x5) | openshift-network-console |
multus |
networking-console-plugin-5cd86b96f5-dh6vw |
AddedInterface |
Add eth0 [10.129.2.13/23] from ovn-kubernetes |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/scheduler-kubeconfig-7 -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
| (x5) | openshift-network-console |
multus |
networking-console-plugin-5cd86b96f5-4mr48 |
AddedInterface |
Add eth0 [10.131.0.16/23] from ovn-kubernetes |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 3; 1 node is at revision 4; 1 node is at revision 10" to "NodeInstallerProgressing: 1 node is at revision 4; 2 nodes are at revision 10",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 3; 1 node is at revision 4; 1 node is at revision 10" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 4; 2 nodes are at revision 10" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 3 to 10 because static pod is ready | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-11 -n openshift-kube-controller-manager because it was missing | |
openshift-network-console |
kubelet |
networking-console-plugin-5cd86b96f5-4mr48 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5922faf77c005dda67080ab24a6f4c91177a822c378520fe7738138fc9dce3a8" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-10 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-7 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-11 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-7 -n openshift-kube-scheduler because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-10 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-11 -n openshift-kube-controller-manager because it was missing |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-dh6vw | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5922faf77c005dda67080ab24a6f4c91177a822c378520fe7738138fc9dce3a8" in 3.849s (3.849s including waiting). Image size: 366440567 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-10 -n openshift-kube-apiserver because it was missing |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-dh6vw | Started | Started container networking-console-plugin |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-dh6vw | Created | Created container networking-console-plugin |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodUpdated | Updated Pod/openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-scheduler because it changed |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-4mr48 | Started | Started container networking-console-plugin |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-11 -n openshift-kube-controller-manager because it was missing |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-4mr48 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:5922faf77c005dda67080ab24a6f4c91177a822c378520fe7738138fc9dce3a8" in 3.863s (3.863s including waiting). Image size: 366440567 bytes. |
| | openshift-network-console | kubelet | networking-console-plugin-5cd86b96f5-4mr48 | Created | Created container networking-console-plugin |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-7 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 7 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-11 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 6; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 6" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 6; 0 nodes have achieved new revision 7" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-10 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-11 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 11 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-10 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 6:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:45.364366 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:55.363740 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:05.364030 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:15.364193 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.364413 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:25.365130 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:25.365175 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 4 to 10 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 with revision 4 is the oldest |
| | openshift-kube-scheduler | multus | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.61/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-10 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 4; 2 nodes are at revision 10" to "NodeInstallerProgressing: 1 node is at revision 4; 2 nodes are at revision 10; 0 nodes have achieved new revision 11",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 4; 2 nodes are at revision 10" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 4; 2 nodes are at revision 10; 0 nodes have achieved new revision 11" |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-10 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | multus | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.82/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-scheduler | multus | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.83/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/metrics-server -n openshift-monitoring because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-10 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | APIServiceCreated | Created APIService.apiregistration.k8s.io/v1beta1.metrics.k8s.io because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-prunecontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-10 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager | multus | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.84/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-scheduler | multus | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-kube-scheduler | kubelet | revision-pruner-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-10 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 10 triggered by "required secret/localhost-recovery-client-token has changed" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is crashed in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| (x3) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is crashed in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is crashed in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is crashed in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is crashed in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.62/23] from ovn-kubernetes |
| (x5) | openshift-apiserver | kubelet | apiserver-5d5579f445-5twj5 | ProbeError | Readiness probe error: Get "https://10.129.0.29:8443/readyz": dial tcp 10.129.0.29:8443: connect: connection refused body: |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container etcd-ensure-env-vars |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:23:53.148987 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:03.148942 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:13.149702 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:23.148978 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:33.149148 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:24:33.150178 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:24:33.150207 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 10",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 10" |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container etcd-ensure-env-vars |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.85/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/alertmanagerconfigs.openshift.io because it changed |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ValidatingWebhookConfigurationUpdated | Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/prometheusrules.openshift.io because it changed |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container etcdctl |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is crashed in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (crashlooping container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-10-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Killing | Stopping container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Killing | Stopping container prom-label-proxy |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-1 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/alertmanager-prometheusk8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-k8s-thanos-sidecar -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ServiceCreated | Created Service/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | multus | alertmanager-main-1 | AddedInterface | Add eth0 [10.129.2.14/23] from ovn-kubernetes |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s-config -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n default because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n kube-system because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-user-workload-monitoring because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-2fcpj5j6-f6035-2lklf-master-1 is unhealthy" |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:043ca523dadf0cb36a10aad24f14834201493f0c07dacb58f450fad7e6ba1f50" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| (x2) | openshift-monitoring | controllermanager | prometheus-k8s | NoPods | No matching pods found |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-additional-alertmanager-configs -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/prometheus-k8s -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/prometheus-k8s-grpc-tls-k8qrs2rebktr -n openshift-monitoring because it was missing |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/prometheus-trusted-ca-bundle -n openshift-monitoring because it was missing |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Claim prometheus-data-prometheus-k8s-0 Pod prometheus-k8s-0 in StatefulSet prometheus-k8s success |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
| | openshift-monitoring | persistentvolume-controller | prometheus-data-prometheus-k8s-0 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openshift-monitoring | pd.csi.storage.gke.io_ci-op-2fcpj5j6-f6035-2lklf-master-2_a399a6ff-8af1-4940-ad0a-9c83c35cf558 | prometheus-data-prometheus-k8s-1 | Provisioning | External provisioner is provisioning volume for claim "openshift-monitoring/prometheus-data-prometheus-k8s-1" |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Claim prometheus-data-prometheus-k8s-1 Pod prometheus-k8s-1 in StatefulSet prometheus-k8s success |
| | openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful |
| (x2) | openshift-monitoring | persistentvolume-controller | prometheus-data-prometheus-k8s-1 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'pd.csi.storage.gke.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openshift-monitoring | persistentvolume-controller | prometheus-data-prometheus-k8s-1 | WaitForFirstConsumer | waiting for first consumer to be created before binding |
| | openshift-monitoring | pd.csi.storage.gke.io_ci-op-2fcpj5j6-f6035-2lklf-master-2_a399a6ff-8af1-4940-ad0a-9c83c35cf558 | prometheus-data-prometheus-k8s-0 | Provisioning | External provisioner is provisioning volume for claim "openshift-monitoring/prometheus-data-prometheus-k8s-0" |
| (x2) | openshift-monitoring | persistentvolume-controller | prometheus-data-prometheus-k8s-0 | ExternalProvisioning | Waiting for a volume to be created either by the external provisioner 'pd.csi.storage.gke.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.63/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-monitoring | pd.csi.storage.gke.io_ci-op-2fcpj5j6-f6035-2lklf-master-2_a399a6ff-8af1-4940-ad0a-9c83c35cf558 | prometheus-data-prometheus-k8s-0 | ProvisioningSucceeded | Successfully provisioned volume pvc-bebd8955-95af-4a81-ab58-31a7f16c6c0b |
| | openshift-monitoring | pd.csi.storage.gke.io_ci-op-2fcpj5j6-f6035-2lklf-master-2_a399a6ff-8af1-4940-ad0a-9c83c35cf558 | prometheus-data-prometheus-k8s-1 | ProvisioningSucceeded | Successfully provisioned volume pvc-84fdde18-cf7b-44df-8289-0df5b52b32d2 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (crashlooping container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX/healthz\": EOF\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (crashlooping container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 5 to 12 because static pod is ready |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.6:2379 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n\u00a0\u00a0\t\t\tstring(\"https://10.0.0.3:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://10.0.0.4:2379\"),\n-\u00a0\t\t\tstring(\"https://10.0.0.5:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://10.0.0.6:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "etcd-servers": []any{ string("https://10.0.0.3:2379"), string("https://10.0.0.4:2379"), - string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), string("https://localhost:2379"), }, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, ... // 3 identical entries }, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, ... // 3 identical entries } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.6:2379,https://localhost:2379 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-pod -n openshift-etcd: cause by changes in data.pod.yaml |
| | openshift-monitoring | attachdetach-controller | prometheus-k8s-0 | SuccessfulAttachVolume | AttachVolume.Attach succeeded for volume "pvc-bebd8955-95af-4a81-ab58-31a7f16c6c0b" |
| | openshift-monitoring | attachdetach-controller | prometheus-k8s-1 | SuccessfulAttachVolume | AttachVolume.Attach succeeded for volume "pvc-84fdde18-cf7b-44df-8289-0df5b52b32d2" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-11 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:918b653a1a4b5480759687e3f7bde98d8917d54d56dd1588bfd4b20f31866f3f" |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container init-config-reloader |
| | openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.2.15/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | multus | prometheus-k8s-1 | AddedInterface | Add eth0 [10.129.2.15/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container init-config-reloader |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5." |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:918b653a1a4b5480759687e3f7bde98d8917d54d56dd1588bfd4b20f31866f3f" |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-8ndgb | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6dcfd955f4 | SuccessfulCreate | Created pod: apiserver-6dcfd955f4-p5j6s |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6dcfd955f4 to 1 from 0 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-8bdbc6bbb to 2 from 3 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-8bdbc6bbb | SuccessfulDelete | Deleted pod: apiserver-8bdbc6bbb-8ndgb |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/restore-etcd-pod -n openshift-etcd: cause by changes in data.pod.yaml,data.quorum-restore-pod.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-11 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-11 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-11 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container prometheus |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:918b653a1a4b5480759687e3f7bde98d8917d54d56dd1588bfd4b20f31866f3f" in 4.625s (4.625s including waiting). Image size: 574330828 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c793dd5bee91ac200f1286c5b1347506bf9b069890c3f206a2cf3fb9228f525c" |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c793dd5bee91ac200f1286c5b1347506bf9b069890c3f206a2cf3fb9228f525c" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container config-reloader |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:918b653a1a4b5480759687e3f7bde98d8917d54d56dd1588bfd4b20f31866f3f" in 4.273s (4.273s including waiting). Image size: 574330828 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container prometheus |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container prometheus |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-11 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container thanos-sidecar |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c793dd5bee91ac200f1286c5b1347506bf9b069890c3f206a2cf3fb9228f525c" in 2.969s (2.969s including waiting). Image size: 506290318 bytes. |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container kube-rbac-proxy-web |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-11 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container kube-rbac-proxy-thanos |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-k8s-1 | Created | Created container kube-rbac-proxy |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-11 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigControllerFailed | Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: failed to apply machine config controller manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/machine-config-controller": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/sa-token-signing-certs-11 -n openshift-kube-apiserver: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/authorization.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/build.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x11) | openshift-console | kubelet | console-7ccb568577-xpft2 | Unhealthy | Startup probe failed: Get "https://10.128.0.58:8443/health": dial tcp 10.128.0.58:8443: connect: connection refused |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/image.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| (x11) | openshift-console | kubelet | console-7ccb568577-xpft2 | ProbeError | Startup probe error: Get "https://10.128.0.58:8443/health": dial tcp 10.128.0.58:8443: connect: connection refused body: |
| (x3) | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-8ndgb | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/project.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/quota.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| (x11) | openshift-console | kubelet | console-54d86f69c8-zdmgs | ProbeError | Startup probe error: Get "https://10.130.0.79:8443/health": dial tcp 10.130.0.79:8443: connect: connection refused body: |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| (x11) | openshift-console | kubelet | console-54d86f69c8-zdmgs | Unhealthy | Startup probe failed: Get "https://10.130.0.79:8443/health": dial tcp 10.130.0.79:8443: connect: connection refused |
| (x2) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| (x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-controller-manager-cert-syncer |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| (x5) | openshift-authentication | kubelet | oauth-openshift-67d88f768b-wblgk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" already present on machine |
| (x5) | openshift-authentication | kubelet | oauth-openshift-67d88f768b-wblgk | Created | Created container oauth-openshift |
| (x5) | openshift-authentication | kubelet | oauth-openshift-67d88f768b-wblgk | Started | Started container oauth-openshift |
| (x11) | openshift-console | kubelet | console-54d86f69c8-k5dnq | ProbeError | Startup probe error: Get "https://10.129.0.60:8443/health": dial tcp 10.129.0.60:8443: connect: connection refused body: |
| (x11) | openshift-console | kubelet | console-54d86f69c8-k5dnq | Unhealthy | Startup probe failed: Get "https://10.129.0.60:8443/health": dial tcp 10.129.0.60:8443: connect: connection refused |
| (x10) | openshift-authentication | kubelet | oauth-openshift-67d88f768b-wblgk | BackOff | Back-off restarting failed container oauth-openshift in pod oauth-openshift-67d88f768b-wblgk_openshift-authentication(442f9bd1-945c-4905-b31b-90d88487a320) |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-fjp8f" : failed to fetch token: Post "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 10.0.0.2:6443: i/o timeout |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| (x13) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster": dial tcp 172.30.0.1:443: connect: connection refused |
| (x15) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 11: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| (x13) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | Failed to create installer pod for revision 10 count 1 on node "ci-op-2fcpj5j6-f6035-2lklf-master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-ci-op-2fcpj5j6-f6035-2lklf-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| (x3) | openshift-network-node-identity | kubelet | network-node-identity-qfbfs | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| (x3) | openshift-network-node-identity | kubelet | network-node-identity-qfbfs | Created | Created container approver |
| (x3) | openshift-network-node-identity | kubelet | network-node-identity-qfbfs | Started | Started container approver |
| (x6) | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-fjp8f" : failed to fetch token: Post "https://api-int.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/api/v1/namespaces/openshift-apiserver/serviceaccounts/openshift-apiserver-sa/token": dial tcp 10.0.0.2:6443: connect: connection refused |
| (x29) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x27) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | EtcdMembersErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| (x28) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionCreateFailed | Failed to create revision 13: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | cert-regeneration-controller | openshift-kube-apiserver | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "helm-chartrepos-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "cluster-admin" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found] |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_4dbc38f7-ac0d-4868-9ceb-90c1dc3e1c7e became leader |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 500, err = an error on the server ("Internal Server Error: \"/apis/apps.openshift.io/v1\": Post \"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 172.30.0.1:443: connect: connection refused") has prevented the request from succeeding |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 500, err = an error on the server ("Internal Server Error: \"/apis/build.openshift.io/v1\": Post \"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 172.30.0.1:443: connect: connection refused") has prevented the request from succeeding |
| (x6) | openshift-marketplace | kubelet | marketplace-operator-7ddb67b76c-d2flk | BackOff | Back-off restarting failed container marketplace-operator in pod marketplace-operator-7ddb67b76c-d2flk_openshift-marketplace(488398fd-a023-4e8c-9a63-e0ccea7d75ac) |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded changed from False to True ("OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (crashlooping container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)") |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_13c6727e-5ad2-4dc3-87bf-e48107158f32 became leader |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulDelete | delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | openshift-console | replicaset-controller | console-7ccb568577 | SuccessfulDelete | Deleted pod: console-7ccb568577-xpft2 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-7ccb568577 to 0 from 1 |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | multus | apiserver-6d6946f85d-8v797 | AddedInterface | Add eth0 [10.129.0.64/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-11 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 11 triggered by "required configmap/config has changed" |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-11 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-11 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-11 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-11 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-p5j6s | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-6dcfd955f4-p5j6s_openshift-oauth-apiserver_cee29ddd-289b-462c-8f80-b5ba35d11fd0_0(26ac4333a14228bc8ec74c4b82e56a297af6eef3f9f198f523f8e6c3c7c3c44a): error adding pod openshift-oauth-apiserver_apiserver-6dcfd955f4-p5j6s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"26ac4333a14228bc8ec74c4b82e56a297af6eef3f9f198f523f8e6c3c7c3c44a" Netns:"/var/run/netns/a762db4c-4e1b-40d6-be16-878479f1c94e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-6dcfd955f4-p5j6s;K8S_POD_INFRA_CONTAINER_ID=26ac4333a14228bc8ec74c4b82e56a297af6eef3f9f198f523f8e6c3c7c3c44a;K8S_POD_UID=cee29ddd-289b-462c-8f80-b5ba35d11fd0" Path:"" ERRORED: error configuring pod [openshift-oauth-apiserver/apiserver-6dcfd955f4-p5j6s] networking: Multus: [openshift-oauth-apiserver/apiserver-6dcfd955f4-p5j6s/cee29ddd-289b-462c-8f80-b5ba35d11fd0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod apiserver-6dcfd955f4-p5j6s in out of cluster comm: pod "apiserver-6dcfd955f4-p5j6s" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6d6946f85d to 3 from 2 |
| | openshift-apiserver | replicaset-controller | apiserver-6d6946f85d | SuccessfulCreate | Created pod: apiserver-6d6946f85d-6dknb |
| | openshift-apiserver | replicaset-controller | apiserver-6d7dbc56c5 | SuccessfulDelete | Deleted pod: apiserver-6d7dbc56c5-jl6d4 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6d7dbc56c5 to 0 from 1 |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-p5j6s | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-6dcfd955f4-p5j6s_openshift-oauth-apiserver_cee29ddd-289b-462c-8f80-b5ba35d11fd0_0(4b6f249e47ff1bd787568b697fcc7c78fc335c31d98b1736e7ea41ee88f63a1c): error adding pod openshift-oauth-apiserver_apiserver-6dcfd955f4-p5j6s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4b6f249e47ff1bd787568b697fcc7c78fc335c31d98b1736e7ea41ee88f63a1c" Netns:"/var/run/netns/b1e45762-4793-4102-b8f9-62102cd68d26" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-oauth-apiserver;K8S_POD_NAME=apiserver-6dcfd955f4-p5j6s;K8S_POD_INFRA_CONTAINER_ID=4b6f249e47ff1bd787568b697fcc7c78fc335c31d98b1736e7ea41ee88f63a1c;K8S_POD_UID=cee29ddd-289b-462c-8f80-b5ba35d11fd0" Path:"" ERRORED: error configuring pod [openshift-oauth-apiserver/apiserver-6dcfd955f4-p5j6s] networking: Multus: [openshift-oauth-apiserver/apiserver-6dcfd955f4-p5j6s/cee29ddd-289b-462c-8f80-b5ba35d11fd0]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod apiserver-6dcfd955f4-p5j6s in out of cluster comm: pod "apiserver-6dcfd955f4-p5j6s" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-p5j6s | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-p5j6s | Started | Started container fix-audit-permissions |
| (x3) | openshift-oauth-apiserver | multus | apiserver-6dcfd955f4-p5j6s | AddedInterface | Add eth0 [10.129.0.65/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-p5j6s | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-p5j6s | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-p5j6s | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-p5j6s | Created | Created container oauth-apiserver |
| | openshift-marketplace | multus | redhat-operators-pmhhd | AddedInterface | Add eth0 [10.129.0.66/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6dcfd955f4 to 2 from 1 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6dcfd955f4 | SuccessfulCreate | Created pod: apiserver-6dcfd955f4-fpnbz |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-8bdbc6bbb | SuccessfulDelete | Deleted pod: apiserver-8bdbc6bbb-hgf9w |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-8bdbc6bbb to 1 from 2 |
| (x3) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: i-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:18.471628 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:28.473946 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:38.471439 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:48.471152 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:58.471998 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:58.472835 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F1024 13:27:58.472864 1 cmd.go:105] timed out waiting for the condition |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.331s (1.331s including waiting). Image size: 1411450299 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 976ms (976ms including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-j8qz2_openshift-marketplace_dc070060-c1a4-436f-9f33-6d71d8645fa7_0(b84dbee2ca34acbd577be186be57bde9fab5ebd6ea535056a23a6c1838d9fcbb): error adding pod openshift-marketplace_redhat-marketplace-j8qz2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b84dbee2ca34acbd577be186be57bde9fab5ebd6ea535056a23a6c1838d9fcbb" Netns:"/var/run/netns/d33ee7b5-a1d2-4251-965b-e29e9486753c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-j8qz2;K8S_POD_INFRA_CONTAINER_ID=b84dbee2ca34acbd577be186be57bde9fab5ebd6ea535056a23a6c1838d9fcbb;K8S_POD_UID=dc070060-c1a4-436f-9f33-6d71d8645fa7" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-j8qz2] networking: Multus: [openshift-marketplace/redhat-marketplace-j8qz2/dc070060-c1a4-436f-9f33-6d71d8645fa7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod redhat-marketplace-j8qz2 in out of cluster comm: pod "redhat-marketplace-j8qz2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Started | Started container registry-server |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | StartingNewRevision | new revision 13 triggered by "required configmap/etcd-pod has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 11:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:18.471628 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:28.473946 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:38.471439 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:48.471152 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:58.471998 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:58.472835 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:27:58.472864 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-13 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-13 -n openshift-etcd because it was missing |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-j8qz2_openshift-marketplace_dc070060-c1a4-436f-9f33-6d71d8645fa7_0(90add04528b0706bac38278d57432770036d9c114e9bfac072add872b0d096ec): error adding pod openshift-marketplace_redhat-marketplace-j8qz2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"90add04528b0706bac38278d57432770036d9c114e9bfac072add872b0d096ec" Netns:"/var/run/netns/199028f8-d3cb-4df2-a226-70e306733335" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-j8qz2;K8S_POD_INFRA_CONTAINER_ID=90add04528b0706bac38278d57432770036d9c114e9bfac072add872b0d096ec;K8S_POD_UID=dc070060-c1a4-436f-9f33-6d71d8645fa7" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-j8qz2] networking: Multus: [openshift-marketplace/redhat-marketplace-j8qz2/dc070060-c1a4-436f-9f33-6d71d8645fa7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod redhat-marketplace-j8qz2 in out of cluster comm: pod "redhat-marketplace-j8qz2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-hgf9w | Killing | Stopping container oauth-apiserver |
| | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | Killing | Stopping container openshift-apiserver |
| | openshift-marketplace | kubelet | redhat-operators-pmhhd | Killing | Stopping container registry-server |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (crashlooping container is waiting in oauth-openshift-67d88f768b-wblgk pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-28829610 | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-28829610 |
SuccessfulCreate |
Created pod: collect-profiles-28829610-rrpw2 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorVersionChanged |
clusteroperator/authentication version "oauth-openshift" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest_openshift" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"oauth-apiserver" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"oauth-apiserver" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"oauth-openshift" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest_openshift"}] | |
| (x28) | openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 11 triggered by "required configmap/config has changed" |
| (x4) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed |
installer errors: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:44.055363 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:54.056329 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:28:04.056500 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:28:14.056228 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:28:24.056670 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:28:24.058084 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F1024 13:28:24.058164 1 cmd.go:105] 
timed out waiting for the condition |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-controller-manager because it was missing |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829610-rrpw2 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-28829610-rrpw2_openshift-operator-lifecycle-manager_44feee4c-23e8-4985-8fda-5b24d757ea66_0(a86dedabebf12106e8607eb6b9a0c5b379d43989e651194599f5198cfda8c348): error adding pod openshift-operator-lifecycle-manager_collect-profiles-28829610-rrpw2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a86dedabebf12106e8607eb6b9a0c5b379d43989e651194599f5198cfda8c348" Netns:"/var/run/netns/f2935022-8d1e-4bae-8100-324bdc4b3eb3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28829610-rrpw2;K8S_POD_INFRA_CONTAINER_ID=a86dedabebf12106e8607eb6b9a0c5b379d43989e651194599f5198cfda8c348;K8S_POD_UID=44feee4c-23e8-4985-8fda-5b24d757ea66" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-28829610-rrpw2] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-28829610-rrpw2/44feee4c-23e8-4985-8fda-5b24d757ea66]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-28829610-rrpw2 in out of cluster comm: pod "collect-profiles-28829610-rrpw2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| (x12) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 11: configmaps "sa-token-signing-certs-11" already exists |
| (x12) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/sa-token-signing-certs-11 -n openshift-kube-apiserver: configmaps "sa-token-signing-certs-11" already exists |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container prom-label-proxy |
| | openshift-kube-controller-manager | kubelet | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | multus | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.86/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Killing | Stopping container alertmanager |
| | openshift-kube-controller-manager | kubelet | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829610-rrpw2 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-28829610-rrpw2_openshift-operator-lifecycle-manager_44feee4c-23e8-4985-8fda-5b24d757ea66_0(e024df2622f37d505d4ee74206b2e6b920401e7cc0e42351b9532f630c68b3fc): error adding pod openshift-operator-lifecycle-manager_collect-profiles-28829610-rrpw2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e024df2622f37d505d4ee74206b2e6b920401e7cc0e42351b9532f630c68b3fc" Netns:"/var/run/netns/739279f1-2f7c-44ab-af06-4b11b1f8de26" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28829610-rrpw2;K8S_POD_INFRA_CONTAINER_ID=e024df2622f37d505d4ee74206b2e6b920401e7cc0e42351b9532f630c68b3fc;K8S_POD_UID=44feee4c-23e8-4985-8fda-5b24d757ea66" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-28829610-rrpw2] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-28829610-rrpw2/44feee4c-23e8-4985-8fda-5b24d757ea66]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-28829610-rrpw2 in out of cluster comm: pod "collect-profiles-28829610-rrpw2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b41026ea-c24b-4e7a-bab0-1610c0bf4902_0(ec846b740978340c561b4192f7ee2a7821c90bba913faec05dce5964e27e2884): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"ec846b740978340c561b4192f7ee2a7821c90bba913faec05dce5964e27e2884" Netns:"/var/run/netns/1743d56e-c696-4b4c-9530-7f09b40063b3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=ec846b740978340c561b4192f7ee2a7821c90bba913faec05dce5964e27e2884;K8S_POD_UID=b41026ea-c24b-4e7a-bab0-1610c0bf4902" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: [openshift-monitoring/alertmanager-main-0/b41026ea-c24b-4e7a-bab0-1610c0bf4902:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-0 ec846b740978340c561b4192f7ee2a7821c90bba913faec05dce5964e27e2884 network default NAD default] [openshift-monitoring/alertmanager-main-0 ec846b740978340c561b4192f7ee2a7821c90bba913faec05dce5964e27e2884 network default NAD default] pod deleted before sandbox ADD operation began ' ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-network-node-identity | ci-op-2fcpj5j6-f6035-2lklf-master-1_331ee2a5-b7cb-4728-83f0-ad74208d78f5 | ovnkube-identity | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_331ee2a5-b7cb-4728-83f0-ad74208d78f5 became leader |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-j8qz2_openshift-marketplace_dc070060-c1a4-436f-9f33-6d71d8645fa7_0(c1aa5533057c6da64fe27dfb13e5104d23e5728d0b76b0c9ca8ca45e0116e39b): error adding pod openshift-marketplace_redhat-marketplace-j8qz2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c1aa5533057c6da64fe27dfb13e5104d23e5728d0b76b0c9ca8ca45e0116e39b" Netns:"/var/run/netns/0f724caf-3810-413e-b10a-c90dc7564960" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-j8qz2;K8S_POD_INFRA_CONTAINER_ID=c1aa5533057c6da64fe27dfb13e5104d23e5728d0b76b0c9ca8ca45e0116e39b;K8S_POD_UID=dc070060-c1a4-436f-9f33-6d71d8645fa7" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-j8qz2] networking: Multus: [openshift-marketplace/redhat-marketplace-j8qz2/dc070060-c1a4-436f-9f33-6d71d8645fa7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod redhat-marketplace-j8qz2 in out of cluster comm: pod "redhat-marketplace-j8qz2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:44.055363 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:54.056329 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:04.056500 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:14.056228 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:24.056670 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:24.058084 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:28:24.058164 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: kube-apiserver-audit-policies-11,sa-token-signing-certs-11\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:44.055363 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:54.056329 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:04.056500 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:14.056228 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:24.056670 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:24.058084 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:28:24.058164 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: ",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 10" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 11",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 11" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:44.055363 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:54.056329 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:04.056500 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:14.056228 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 
13:28:24.056670 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:24.058084 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:28:24.058164 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " | |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.67/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b41026ea-c24b-4e7a-bab0-1610c0bf4902_0(e7dfb9bf3689038aa00d826f7bebf5d309339c698971fa5d4b3b25788c7ec08b): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e7dfb9bf3689038aa00d826f7bebf5d309339c698971fa5d4b3b25788c7ec08b" Netns:"/var/run/netns/e8e94e70-d82d-42cb-a246-18411db29282" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=e7dfb9bf3689038aa00d826f7bebf5d309339c698971fa5d4b3b25788c7ec08b;K8S_POD_UID=b41026ea-c24b-4e7a-bab0-1610c0bf4902" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: [openshift-monitoring/alertmanager-main-0/b41026ea-c24b-4e7a-bab0-1610c0bf4902:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-0 e7dfb9bf3689038aa00d826f7bebf5d309339c698971fa5d4b3b25788c7ec08b network default NAD default] [openshift-monitoring/alertmanager-main-0 e7dfb9bf3689038aa00d826f7bebf5d309339c698971fa5d4b3b25788c7ec08b network default NAD default] pod deleted before sandbox ADD operation began ' ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.87/23] from ovn-kubernetes |
| (x23) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: kube-apiserver-audit-policies-11,sa-token-signing-certs-11 |
| (x45) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged |
Writing updated observed config: map[string]any{ ... // 3 identical entries "routingConfig": map[string]any{"subdomain": string("apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX")}, "servingInfo": map[string]any{"cipherSuites": []any{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]any{ "urls": []any{ string("https://10.0.0.3:2379"), string("https://10.0.0.4:2379"), - string("https://10.0.0.5:2379"), string("https://10.0.0.6:2379"), }, }, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| (x46) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://10.0.0.3:2379,https://10.0.0.4:2379,https://10.0.0.6:2379 |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-apiserver_2738c4d5-9120-4c40-a913-34ad566cd0a2_0(a1717c5902a63afe5bb867fe760efc92bb6e0fa96c0ae6b1fae2fc133d2c5345): error adding pod openshift-kube-apiserver_revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a1717c5902a63afe5bb867fe760efc92bb6e0fa96c0ae6b1fae2fc133d2c5345" Netns:"/var/run/netns/a221dbb5-66af-4702-a27a-7feac8e467be" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=a1717c5902a63afe5bb867fe760efc92bb6e0fa96c0ae6b1fae2fc133d2c5345;K8S_POD_UID=2738c4d5-9120-4c40-a913-34ad566cd0a2" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-apiserver/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2/2738c4d5-9120-4c40-a913-34ad566cd0a2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| (x34) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on openshiftapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829610-rrpw2 | FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_collect-profiles-28829610-rrpw2_openshift-operator-lifecycle-manager_44feee4c-23e8-4985-8fda-5b24d757ea66_0(5e96b491f1cc6a430a17455742173820381001e2d31ae7045c606fca6fab7d24): error adding pod openshift-operator-lifecycle-manager_collect-profiles-28829610-rrpw2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e96b491f1cc6a430a17455742173820381001e2d31ae7045c606fca6fab7d24" Netns:"/var/run/netns/c491d760-03fa-4e95-b673-fce57db96fe1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-operator-lifecycle-manager;K8S_POD_NAME=collect-profiles-28829610-rrpw2;K8S_POD_INFRA_CONTAINER_ID=5e96b491f1cc6a430a17455742173820381001e2d31ae7045c606fca6fab7d24;K8S_POD_UID=44feee4c-23e8-4985-8fda-5b24d757ea66" Path:"" ERRORED: error configuring pod [openshift-operator-lifecycle-manager/collect-profiles-28829610-rrpw2] networking: Multus: [openshift-operator-lifecycle-manager/collect-profiles-28829610-rrpw2/44feee4c-23e8-4985-8fda-5b24d757ea66]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod collect-profiles-28829610-rrpw2 in out of cluster comm: pod "collect-profiles-28829610-rrpw2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-6d7dbc56c5-jl6d4 pod)\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on openshiftapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well"),Available changed from True to False ("APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 500, err = an error on the server (\"Internal Server Error: \\\"/apis/apps.openshift.io/v1\\\": Post \\\"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\\\": dial tcp 172.30.0.1:443: connect: connection refused\") has prevented the request from succeeding\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 500, err = an error on the server (\"Internal Server Error: \\\"/apis/build.openshift.io/v1\\\": Post \\\"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\\\": dial tcp 172.30.0.1:443: connect: connection refused\") has prevented the request from succeeding") | |
| (x6) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | installer errors: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:16.272374 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:26.271495 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:36.271836 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:46.272336 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:56.271500 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:27:56.272431 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F1024 13:27:56.272470 1 cmd.go:105] timed out waiting for the condition |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-j8qz2_openshift-marketplace_dc070060-c1a4-436f-9f33-6d71d8645fa7_0(b615ee434f96584ce57c36e62f9c2bb2e65510ac7d5b50d9a153efbce2d7ce1c): error adding pod openshift-marketplace_redhat-marketplace-j8qz2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b615ee434f96584ce57c36e62f9c2bb2e65510ac7d5b50d9a153efbce2d7ce1c" Netns:"/var/run/netns/cc350a43-c91d-463c-b4a4-de388328603c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-j8qz2;K8S_POD_INFRA_CONTAINER_ID=b615ee434f96584ce57c36e62f9c2bb2e65510ac7d5b50d9a153efbce2d7ce1c;K8S_POD_UID=dc070060-c1a4-436f-9f33-6d71d8645fa7" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-j8qz2] networking: Multus: [openshift-marketplace/redhat-marketplace-j8qz2/dc070060-c1a4-436f-9f33-6d71d8645fa7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod redhat-marketplace-j8qz2 in out of cluster comm: pod "redhat-marketplace-j8qz2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-apiserver_2738c4d5-9120-4c40-a913-34ad566cd0a2_0(7e783bb2fec3ca6f65e1509787824c64878b9facf43cba15280d3dedd4e8fde7): error adding pod openshift-kube-apiserver_revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7e783bb2fec3ca6f65e1509787824c64878b9facf43cba15280d3dedd4e8fde7" Netns:"/var/run/netns/5f72cc97-ae94-4923-9a8f-475ce642f81a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=7e783bb2fec3ca6f65e1509787824c64878b9facf43cba15280d3dedd4e8fde7;K8S_POD_UID=2738c4d5-9120-4c40-a913-34ad566cd0a2" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-apiserver/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2/2738c4d5-9120-4c40-a913-34ad566cd0a2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:16.272374 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:26.271495 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:36.271836 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:46.272336 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:56.271500 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:56.272431 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:27:56.272470 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nInstallerControllerDegraded: missing required resources: configmaps: kube-apiserver-audit-policies-11,sa-token-signing-certs-11\nNodeInstallerDegraded: 1 nodes are failing on revision 10:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:44.055363 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:54.056329 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:04.056500 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:14.056228 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:24.056670 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:28:24.058084 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:28:24.058164 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-apiserver | replicaset-controller | apiserver-79fb6d9f75 | SuccessfulCreate | Created pod: apiserver-79fb6d9f75-tmvdf |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-6d7dbc56c5-jl6d4 pod)\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on openshiftapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-6d7dbc56c5-jl6d4 pod)",Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7."),Available changed from False to True ("All is well") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 7, desired generation is 8.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." |
| | openshift-apiserver | replicaset-controller | apiserver-6d6946f85d | SuccessfulDelete | Deleted pod: apiserver-6d6946f85d-6dknb |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6d6946f85d to 2 from 3 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-79fb6d9f75 to 1 from 0 |
| | openshift-monitoring | kubelet | alertmanager-main-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_b41026ea-c24b-4e7a-bab0-1610c0bf4902_0(1fe7322d6a8e97d2b933b42f3db9b2849c2950d9aaa2a567986cbcf9f3d52365): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1fe7322d6a8e97d2b933b42f3db9b2849c2950d9aaa2a567986cbcf9f3d52365" Netns:"/var/run/netns/ca70fcdc-f9b2-49d2-94d0-5b74532e68e4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=1fe7322d6a8e97d2b933b42f3db9b2849c2950d9aaa2a567986cbcf9f3d52365;K8S_POD_UID=b41026ea-c24b-4e7a-bab0-1610c0bf4902" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: [openshift-monitoring/alertmanager-main-0/b41026ea-c24b-4e7a-bab0-1610c0bf4902:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-monitoring/alertmanager-main-0 1fe7322d6a8e97d2b933b42f3db9b2849c2950d9aaa2a567986cbcf9f3d52365 network default NAD default] [openshift-monitoring/alertmanager-main-0 1fe7322d6a8e97d2b933b42f3db9b2849c2950d9aaa2a567986cbcf9f3d52365 network default NAD default] pod deleted before sandbox ADD operation began ' ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.2.16/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container alertmanager |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:043ca523dadf0cb36a10aad24f14834201493f0c07dacb58f450fad7e6ba1f50" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container init-config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:bc5f38c51760e689e794a20ca25d598175fa1bdea803e4ff35413b20f7b773f6" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-6d7dbc56c5-jl6d4 pod)" to "All is well",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 7, desired generation is 8." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-web |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy-metric |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 7, desired generation is 8.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 7, desired generation is 8." |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a3497c07ce696a3d6e03fd86a6906ad1907641efd9b6a75728859aa8528fd549" already present on machine |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container prom-label-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ff7b9d9ed505a6b67f38f5a8c628d4fd03bd136119e29ee42d8368ef33f23e87" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829610-rrpw2 | Started | Started container collect-profiles |
| (x4) | openshift-operator-lifecycle-manager | multus | collect-profiles-28829610-rrpw2 | AddedInterface | Add eth0 [10.131.0.17/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829610-rrpw2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829610-rrpw2 | Created | Created container collect-profiles |
| | openshift-kube-controller-manager | static-pod-installer | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | StaticPodInstallerCompleted | Successfully installed revision 11 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| (x20) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ProbeError | Readiness probe error: Get "https://10.0.0.4:10257/healthz": dial tcp 10.0.0.4:10257: connect: connection refused body: |
| (x20) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Unhealthy | Readiness probe failed: Get "https://10.0.0.4:10257/healthz": dial tcp 10.0.0.4:10257: connect: connection refused |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829610 | Completed | Job completed |
| | openshift-kube-apiserver | multus | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.68/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-28829610, condition: Complete |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-j8qz2_openshift-marketplace_dc070060-c1a4-436f-9f33-6d71d8645fa7_0(707d40e61aa9540ffa955951ffa2a2c6da90502994292d3d408844a39c374cbb): error adding pod openshift-marketplace_redhat-marketplace-j8qz2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"707d40e61aa9540ffa955951ffa2a2c6da90502994292d3d408844a39c374cbb" Netns:"/var/run/netns/764e68f9-b148-410f-a5c5-9b3d13935e36" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-j8qz2;K8S_POD_INFRA_CONTAINER_ID=707d40e61aa9540ffa955951ffa2a2c6da90502994292d3d408844a39c374cbb;K8S_POD_UID=dc070060-c1a4-436f-9f33-6d71d8645fa7" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-j8qz2] networking: Multus: [openshift-marketplace/redhat-marketplace-j8qz2/dc070060-c1a4-436f-9f33-6d71d8645fa7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod redhat-marketplace-j8qz2 in out of cluster comm: pod "redhat-marketplace-j8qz2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-7-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-scheduler because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 1" |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-hgf9w | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-scheduler | kubelet | installer-7-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-7-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-scheduler | multus | installer-7-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.88/23] from ovn-kubernetes |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"28497\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 24, 31, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc002350720), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1" |
| | openshift-kube-scheduler | kubelet | installer-7-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-etcd_cdb4b2b8-e040-46fa-83f0-2d1fda92f559_0(286abe15046596332d8fa7817a7a2c4b8241eb14d6837676fd4870e9ee9a5952): error adding pod openshift-etcd_installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"286abe15046596332d8fa7817a7a2c4b8241eb14d6837676fd4870e9ee9a5952" Netns:"/var/run/netns/5b2c27df-4980-4aa0-8602-5853c6b996f0" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd;K8S_POD_NAME=installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=286abe15046596332d8fa7817a7a2c4b8241eb14d6837676fd4870e9ee9a5952;K8S_POD_UID=cdb4b2b8-e040-46fa-83f0-2d1fda92f559" Path:"" ERRORED: error configuring pod [openshift-etcd/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-etcd/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2/cdb4b2b8-e040-46fa-83f0-2d1fda92f559]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-kube-apiserver_2738c4d5-9120-4c40-a913-34ad566cd0a2_0(1f8bd4f3a1ffc0fe1afb85e876f32bec26cf5eced139e4689a3b3b3e0c4b049e): error adding pod openshift-kube-apiserver_revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1f8bd4f3a1ffc0fe1afb85e876f32bec26cf5eced139e4689a3b3b3e0c4b049e" Netns:"/var/run/netns/ec111bbc-280f-40a6-ad08-3003085d7b7e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=1f8bd4f3a1ffc0fe1afb85e876f32bec26cf5eced139e4689a3b3b3e0c4b049e;K8S_POD_UID=2738c4d5-9120-4c40-a913-34ad566cd0a2" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-kube-apiserver/revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2/2738c4d5-9120-4c40-a913-34ad566cd0a2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
openshift-etcd |
kubelet |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
FailedCreatePodSandBox |
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-etcd_cdb4b2b8-e040-46fa-83f0-2d1fda92f559_0(daf02ece2a8d47b5a870fc5afa49d87dce1c5227a9e60876d7592c31e77e2314): error adding pod openshift-etcd_installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"daf02ece2a8d47b5a870fc5afa49d87dce1c5227a9e60876d7592c31e77e2314" Netns:"/var/run/netns/ee59476e-d25d-45ff-9e8d-a8c69befd808" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd;K8S_POD_NAME=installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=daf02ece2a8d47b5a870fc5afa49d87dce1c5227a9e60876d7592c31e77e2314;K8S_POD_UID=cdb4b2b8-e040-46fa-83f0-2d1fda92f559" Path:"" ERRORED: error configuring pod [openshift-etcd/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-etcd/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2/cdb4b2b8-e040-46fa-83f0-2d1fda92f559]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} | |
| (x10) | openshift-oauth-apiserver |
kubelet |
apiserver-8bdbc6bbb-hgf9w |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-j8qz2_openshift-marketplace_dc070060-c1a4-436f-9f33-6d71d8645fa7_0(42782a9a5fcbe85510f858db09911d0031368e5de8bc8b18fe2ecbc88218acf3): error adding pod openshift-marketplace_redhat-marketplace-j8qz2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"42782a9a5fcbe85510f858db09911d0031368e5de8bc8b18fe2ecbc88218acf3" Netns:"/var/run/netns/5a174d18-01a4-4be3-830a-34bd67c6945c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-marketplace;K8S_POD_NAME=redhat-marketplace-j8qz2;K8S_POD_INFRA_CONTAINER_ID=42782a9a5fcbe85510f858db09911d0031368e5de8bc8b18fe2ecbc88218acf3;K8S_POD_UID=dc070060-c1a4-436f-9f33-6d71d8645fa7" Path:"" ERRORED: error configuring pod [openshift-marketplace/redhat-marketplace-j8qz2] networking: Multus: [openshift-marketplace/redhat-marketplace-j8qz2/dc070060-c1a4-436f-9f33-6d71d8645fa7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod redhat-marketplace-j8qz2 in out of cluster comm: pod "redhat-marketplace-j8qz2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2_openshift-etcd_cdb4b2b8-e040-46fa-83f0-2d1fda92f559_0(fe9ba2731f25d8347306869efcf70df0e11a7039c34908ad4279174d29114529): error adding pod openshift-etcd_installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fe9ba2731f25d8347306869efcf70df0e11a7039c34908ad4279174d29114529" Netns:"/var/run/netns/acfea907-9500-4975-bdfd-6141cf7a60bf" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-etcd;K8S_POD_NAME=installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2;K8S_POD_INFRA_CONTAINER_ID=fe9ba2731f25d8347306869efcf70df0e11a7039c34908ad4279174d29114529;K8S_POD_UID=cdb4b2b8-e040-46fa-83f0-2d1fda92f559" Path:"" ERRORED: error configuring pod [openshift-etcd/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2] networking: Multus: [openshift-etcd/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2/cdb4b2b8-e040-46fa-83f0-2d1fda92f559]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 in out of cluster comm: pod "installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-fpnbz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-fpnbz | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-fpnbz | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-fpnbz | Created | Created container oauth-apiserver |
| | openshift-oauth-apiserver | multus | apiserver-6dcfd955f4-fpnbz | AddedInterface | Add eth0 [10.130.0.89/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-fpnbz | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-fpnbz | Created | Created container fix-audit-permissions |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| (x4) | openshift-kube-apiserver | multus | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 4 to 11 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 11:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:18.471628 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:28.473946 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:38.471439 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:48.471152 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:58.471998 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:58.472835 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:27:58.472864 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready",Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 4; 2 nodes are at revision 10; 0 nodes have achieved new revision 11" to "NodeInstallerProgressing: 2 nodes are at revision 10; 1 node is at revision 11",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 4; 2 nodes are at revision 10; 0 nodes have achieved new revision 11" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 10; 1 node is at revision 11" |
| | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-8bdbc6bbb to 0 from 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation" to "" |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6dcfd955f4 to 3 from 2 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6dcfd955f4 | SuccessfulCreate | Created pod: apiserver-6dcfd955f4-2jcfl |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-8bdbc6bbb | SuccessfulDelete | Deleted pod: apiserver-8bdbc6bbb-txb89 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapCreated | Created ConfigMap/monitoring-shared-config -n openshift-config-managed because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 10 to 11 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 10 is the oldest |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'",Progressing changed from True to False ("All is well") |
| | openshift-console | replicaset-controller | console-54d86f69c8 | SuccessfulDelete | Deleted pod: console-54d86f69c8-k5dnq |
| (x3) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| | openshift-console | replicaset-controller | console-849dfdb48 | SuccessfulCreate | Created pod: console-849dfdb48-8n92v |
| | openshift-console | replicaset-controller | console-849dfdb48 | SuccessfulCreate | Created pod: console-849dfdb48-nwk7r |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX returns '503 Service Unavailable'",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, 1 replicas available") |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-54d86f69c8 to 1 from 2 |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-849dfdb48 to 2 |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | static-pod-installer | installer-7-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-1 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Started | Started container extract-utilities |
| (x4) | openshift-etcd | multus | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-console | kubelet | console-849dfdb48-nwk7r | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine |
| | openshift-console | kubelet | console-849dfdb48-nwk7r | Created | Created container console |
| | openshift-console | kubelet | console-849dfdb48-nwk7r | Started | Started container console |
| | openshift-console | multus | console-849dfdb48-nwk7r | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
| (x7) | openshift-marketplace | multus | redhat-marketplace-j8qz2 | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Created | Created container extract-utilities |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 2.27s (2.27s including waiting). Image size: 967040755 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.075s (1.075s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Started | Started container registry-server |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 1" to "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-controller-manager | multus | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.69/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-apiserver | static-pod-installer | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 11 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-8bdbc6bbb-txb89 pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-54d86f69c8 to 0 from 1 | |
openshift-console |
replicaset-controller |
console-54d86f69c8 |
SuccessfulDelete |
Deleted pod: console-54d86f69c8-zdmgs | |
openshift-console |
kubelet |
console-54d86f69c8-zdmgs |
Killing |
Stopping container console | |
| (x4) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ProbeError |
Readiness probe error: Get "https://10.0.0.4:10259/healthz": dial tcp 10.0.0.4:10259: connect: connection refused body: |
| (x4) | openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Unhealthy |
Readiness probe failed: Get "https://10.0.0.4:10259/healthz": dial tcp 10.0.0.4:10259: connect: connection refused |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-marketplace | kubelet | redhat-marketplace-j8qz2 | Killing | Stopping container registry-server |
| (x11) | openshift-apiserver | kubelet | apiserver-6d7dbc56c5-jl6d4 | ProbeError | Readiness probe error: Get "https://10.130.0.65:8443/readyz": dial tcp 10.130.0.65:8443: connect: connection refused body: |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container etcd-metrics |
| | openshift-etcd | static-pod-installer | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 | StaticPodInstallerCompleted | Successfully installed revision 13 |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container etcd-rev |
| (x8) | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-console | kubelet | console-849dfdb48-8n92v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine |
| | openshift-console | kubelet | console-849dfdb48-8n92v | Created | Created container console |
| | openshift-console | kubelet | console-849dfdb48-8n92v | Started | Started container console |
| | openshift-console | multus | console-849dfdb48-8n92v | AddedInterface | Add eth0 [10.129.0.70/23] from ovn-kubernetes |
| (x45) | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Unhealthy | Readiness probe failed: Get "https://10.0.0.6:9980/readyz": dial tcp 10.0.0.6:9980: connect: connection refused |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-8bdbc6bbb-txb89 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Created | Created container fix-audit-permissions |
| | openshift-apiserver | multus | apiserver-79fb6d9f75-tmvdf | AddedInterface | Add eth0 [10.130.0.90/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Started | Started container openshift-apiserver |
| (x2) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| (x2) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Created | Created container openshift-apiserver |
| (x2) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Created | Created container openshift-apiserver-check-endpoints |
| (x2) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Started | Started container openshift-apiserver-check-endpoints |
| (x46) | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ProbeError | Readiness probe error: Get "https://10.0.0.6:9980/readyz": dial tcp 10.0.0.6:9980: connect: connection refused body: |
| (x4) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | BackOff | Back-off restarting failed container openshift-apiserver-check-endpoints in pod apiserver-79fb6d9f75-tmvdf_openshift-apiserver(f1323227-85b8-4200-9087-12906e7dcf45) |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-8bdbc6bbb-txb89 pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in terminated apiserver-8bdbc6bbb-txb89 pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| (x6) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | BackOff | Back-off restarting failed container openshift-apiserver in pod apiserver-79fb6d9f75-tmvdf_openshift-apiserver(f1323227-85b8-4200-9087-12906e7dcf45) |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in terminated apiserver-8bdbc6bbb-txb89 pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-6dcfd955f4-2jcfl | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | Started | Started container fix-audit-permissions |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in apiserver-6dcfd955f4-2jcfl pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2\nNodeInstallerDegraded: 1 nodes are failing on revision 7:\nNodeInstallerDegraded: installer: 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:16.272374 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:26.271495 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:36.271836 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:46.272336 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:56.271500 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:27:56.272431 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-1: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:27:56.272470 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2",Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 6; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 6; 1 node is at revision 7",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 6; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 6; 1 node is at revision 7" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 0 to 7 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 0 to 7 because node ci-op-2fcpj5j6-f6035-2lklf-master-2 static pod not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ci-op-2fcpj5j6-f6035-2lklf-master-2 is unhealthy" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container setup |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 11:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:38.039435 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:48.040152 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:58.039382 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:08.039785 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:18.039110 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:18.040214 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:32:18.040256 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | installer errors: installer: i-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:31:38.039435 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:31:48.040152 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:31:58.039382 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:32:08.039785 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:32:18.039110 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused W1024 13:32:18.040214 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller": dial tcp 172.30.0.1:443: connect: connection refused F1024 13:32:18.040256 1 cmd.go:105] timed out waiting for the condition |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | multus | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-ensure-env-vars |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in apiserver-6dcfd955f4-2jcfl pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is crashed in apiserver-6dcfd955f4-2jcfl pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is crashed in apiserver-6dcfd955f4-2jcfl pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (crashlooping container is waiting in apiserver-6dcfd955f4-2jcfl pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" | |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigControllerFailed | Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: failed to apply machine config controller manifests: Get "https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-machine-config-operator/rolebindings/machine-os-puller-binding": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | InstallerPodFailed | Failed to create installer pod for revision 13 count 1 on node "ci-op-2fcpj5j6-f6035-2lklf-master-2": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2": dial tcp 172.30.0.1:443: connect: connection refused |
| (x3) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-openshift-apiserver-apiservice | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x24) | openshift-machine-config-operator | machineconfigoperator | machine-config | OperatorDegraded: MachineConfigPoolsFailed | Failed to resync 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest because: Get "https://172.30.0.1:443/apis/machineconfiguration.openshift.io/v1/machineconfigpools/master": dial tcp 172.30.0.1:443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| (x14) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Readiness probe error: Get "https://10.0.0.3:10257/healthz": dial tcp 10.0.0.3:10257: connect: connection refused body: |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ProbeError | Startup probe error: Get "https://10.0.0.6:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-controller-manager |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-controller-manager |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | Started | Started container oauth-apiserver |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_7f478016-b1f8-490d-9b7b-e66c65aa27ec became leader |
| (x4) | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | Created | Created container oauth-apiserver |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | KubeAPIReadyz | readyz=true |
| (x12) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | InstallerPodFailed | Failed to create installer pod for revision 11 count 1 on node "ci-op-2fcpj5j6-f6035-2lklf-master-0": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0": dial tcp 172.30.0.1:443: connect: connection refused |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-controller-manager |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-controller-manager |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_78a12a48-1a3b-4830-9dca-b2480514c44a became leader |
| (x10) | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | BackOff | Back-off restarting failed container oauth-apiserver in pod apiserver-6dcfd955f4-2jcfl_openshift-oauth-apiserver(b3bd6a72-1e58-463b-91fd-bb00ba612ef6) |
| (x7) | openshift-console | kubelet | console-849dfdb48-8n92v | ProbeError | Startup probe error: Get "https://10.129.0.70:8443/health": dial tcp 10.129.0.70:8443: connect: connection refused body: |
| (x7) | openshift-console | kubelet | console-849dfdb48-8n92v | Unhealthy | Startup probe failed: Get "https://10.129.0.70:8443/health": dial tcp 10.129.0.70:8443: connect: connection refused |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-controller-manager because it was missing |
| | openshift-network-node-identity | kubelet | network-node-identity-m577s | BackOff | Back-off restarting failed container approver in pod network-node-identity-m577s_openshift-network-node-identity(9490d7dd-6e80-4251-9145-6bd6a36f2177) |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-controller-manager | kubelet | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | FailedCreatePodSandBox | Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0_openshift-kube-controller-manager_896c0dcf-3ce6-4e37-a9d6-b81e763cd18d_0(c4c3f4d35baa9c207499bd45d1b41c6030aa2c63d4c4bdb9fc245e5c6350227f): error adding pod openshift-kube-controller-manager_installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4c3f4d35baa9c207499bd45d1b41c6030aa2c63d4c4bdb9fc245e5c6350227f" Netns:"/var/run/netns/7d9430f3-867c-486c-b552-23b936174497" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0;K8S_POD_INFRA_CONTAINER_ID=c4c3f4d35baa9c207499bd45d1b41c6030aa2c63d4c4bdb9fc245e5c6350227f;K8S_POD_UID=896c0dcf-3ce6-4e37-a9d6-b81e763cd18d" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0] networking: Multus: [openshift-kube-controller-manager/installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0/896c0dcf-3ce6-4e37-a9d6-b81e763cd18d]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 in out of cluster comm: pod "installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0" not found ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_66632f8d-aa25-4e3a-8f6b-3bc7321a4444 became leader |
| (x4) | openshift-network-node-identity | kubelet | network-node-identity-m577s | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:35d322662a4eeaba7375388ceb256f801355265a75433b867f99c2959a5e05db" already present on machine |
| (x4) | openshift-network-node-identity | kubelet | network-node-identity-m577s | Started | Started container approver |
| (x4) | openshift-network-node-identity | kubelet | network-node-identity-m577s | Created | Created container approver |
| | openshift-kube-controller-manager | kubelet | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| (x2) | openshift-kube-controller-manager | multus | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.71/23] from ovn-kubernetes |
| | openshift-kube-scheduler | static-pod-installer | openshift-kube-scheduler | StaticPodInstallerCompleted | Successfully installed revision 7 |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-scheduler-cert-syncer |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_282c654e-18a3-458b-b653-5ebc31917c1b became leader |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz in Controller |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller |
| | openshift-network-node-identity | ci-op-2fcpj5j6-f6035-2lklf-master-0_026f2466-89b0-4dc2-b2e7-c9a8076b4553 | ovnkube-identity | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_026f2466-89b0-4dc2-b2e7-c9a8076b4553 became leader |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | multus | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.72/23] from ovn-kubernetes |
| (x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6d6946f85d to 1 from 2 |
| | openshift-apiserver | replicaset-controller | apiserver-79fb6d9f75 | SuccessfulCreate | Created pod: apiserver-79fb6d9f75-wm8d6 |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | replicaset-controller | apiserver-6d6946f85d | SuccessfulDelete | Deleted pod: apiserver-6d6946f85d-8v797 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-79fb6d9f75 to 2 from 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (crashlooping container is waiting in apiserver-6dcfd955f4-2jcfl pod)\nWellKnownReadyControllerDegraded: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1",Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: failed to get API server IPs: unable to find kube api server endpointLister port: &v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kubernetes\", GenerateName:\"\", Namespace:\"default\", SelfLink:\"\", UID:\"a2c00cc9-5359-4a0b-a8dd-dd7e06820a29\", ResourceVersion:\"31303\", Generation:0, CreationTimestamp:time.Date(2024, time.October, 24, 12, 57, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"endpointslice.kubernetes.io/skip-mirror\":\"true\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:\"kube-apiserver\", Operation:\"Update\", APIVersion:\"v1\", Time:time.Date(2024, time.October, 24, 13, 28, 57, 0, time.Local), FieldsType:\"FieldsV1\", FieldsV1:(*v1.FieldsV1)(0xc003c4cdc8), Subresource:\"\"}}}, Subsets:[]v1.EndpointSubset(nil)} (check kube-apiserver that it deploys correctly)" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 1" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodCreated | Created Pod/openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container guard |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 11" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 11",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 11") |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container guard |
| | openshift-kube-scheduler | multus | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | static-pod-installer | installer-11-retry-1-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 11 |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container cluster-policy-controller |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/scheduler-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/scheduler-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused") |
| (x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 0 to 11 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/scheduler-kubeconfig\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/scheduler-kubeconfig\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/client-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/client-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| (x4) | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x4) | openshift-apiserver | kubelet | apiserver-6d6946f85d-8v797 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-guardcontroller | openshift-kube-scheduler-operator | PodUpdated | Updated Pod/openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-scheduler because it changed |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_eeaae8a4-4a8a-4d24-9acb-693f9283c92c became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container etcdctl |
| | openshift-etcd | static-pod-installer | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 13 |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container etcd-rev |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_40ef1018-f1da-4ed0-8159-5ac131df6d2d became leader |
| (x12) | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Readiness probe error: Get "https://10.0.0.3:9980/readyz": dial tcp 10.0.0.3:9980: connect: connection refused body: |
| (x12) | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Unhealthy | Readiness probe failed: Get "https://10.0.0.3:9980/readyz": dial tcp 10.0.0.3:9980: connect: connection refused |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 0 to 11 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 static pod not found |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_fbfd3232-2c74-4c55-901b-920840d1f678 became leader |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 11:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:38.039435 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:48.040152 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:58.039382 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:08.039785 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:18.039110 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:18.040214 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:32:18.040256 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: 1 nodes are failing on revision 11:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:38.039435 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:48.040152 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:58.039382 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:08.039785 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:18.039110 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:18.040214 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:32:18.040256 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.91/23] from ovn-kubernetes |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 0 to 7 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 0; 1 node is at revision 6; 1 node is at revision 7" to "NodeInstallerProgressing: 1 node is at revision 6; 2 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 1 node is at revision 6; 1 node is at revision 7" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 6; 2 nodes are at revision 7" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 10 to 11 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0\nNodeInstallerDegraded: 1 nodes are failing on revision 11:\nNodeInstallerDegraded: installer: i-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:38.039435 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:48.040152 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:31:58.039382 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:08.039785 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:18.039110 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: W1024 13:32:18.040214 1 cmd.go:466] Error getting installer pods on current node ci-op-2fcpj5j6-f6035-2lklf-master-0: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: F1024 13:32:18.040256 1 cmd.go:105] timed out waiting for the condition\nNodeInstallerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0",Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 10; 1 node is at revision 11" to "NodeInstallerProgressing: 1 node is at revision 10; 2 nodes are at revision 11",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 10; 1 node is at revision 11" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 10; 2 nodes are at revision 11" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 on node ci-op-2fcpj5j6-f6035-2lklf-master-0" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 6 to 7 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 6 is the oldest |
| | openshift-kube-scheduler | multus | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.73/23] from ovn-kubernetes |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 10 to 11 because node ci-op-2fcpj5j6-f6035-2lklf-master-2 with revision 10 is the oldest |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager | multus | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-controller-manager | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-ensure-env-vars |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-resources-copy |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-metrics |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-rev |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container etcd-readyz |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-kube-apiserver | static-pod-installer | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-1 | StaticPodInstallerCompleted | Successfully installed revision 11 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-fsynccontroller | etcd-operator | EtcdLeaderChangeMetrics | Detected leader change increase of 2.4834222222222224 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0=0.007520,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1=0.006640,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2=0.007580. Most often this is as a result of inadequate storage or sometimes due to networking issues. |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container setup |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler-recovery-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | KubeAPIReadyz | readyz=true |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 1" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2" |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | static-pod-installer | installer-7-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 7 |
| (x28) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Unhealthy | Readiness probe failed: Get "https://10.0.0.3:10259/healthz": dial tcp 10.0.0.3:10259: connect: connection refused |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodCreated | Created Pod/kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 on node ci-op-2fcpj5j6-f6035-2lklf-master-1, Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2]" to "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container guard |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container guard |
| | openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_ce35dccc-0d5e-4500-b3cb-21446d7503a3 became leader |
| | openshift-kube-apiserver | multus | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.92/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-controller-manager | static-pod-installer | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | StaticPodInstallerCompleted | Successfully installed revision 11 |
| (x29) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Readiness probe error: Get "https://10.0.0.3:10259/healthz": dial tcp 10.0.0.3:10259: connect: connection refused body: |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_fbfd3232-2c74-4c55-901b-920840d1f678 stopped leading |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Startup probe error: Get "https://10.0.0.3:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | PodUpdated | Updated Pod/kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it changed |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container wait-for-host-port |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-cert-syncer |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:7f1ef4ec397a7c90b5c3c5f9235d635ab8818ca402ce8de9bade295053038571" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-scheduler-recovery-controller |
| | openshift-kube-scheduler | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_ab8935c3-42b1-4486-916a-ce970895d7aa became leader |
| | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-scheduler-recovery-controller |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 node is at revision 11" to "NodeInstallerProgressing: 1 node is at revision 0; 2 nodes are at revision 11",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 node is at revision 11" to "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 11" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 0 to 11 because static pod is ready |
| (x12) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ProbeError | Readiness probe error: Get "https://10.0.0.6:10257/healthz": dial tcp 10.0.0.6:10257: connect: connection refused body: |
| (x12) | openshift-kube-controller-manager | kubelet | kube-controller-manager-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Unhealthy | Readiness probe failed: Get "https://10.0.0.6:10257/healthz": dial tcp 10.0.0.6:10257: connect: connection refused |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-controller-manager |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-controller-manager-cert-syncer |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-controller-manager-recovery-controller |
| | openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:39eb55fbaba0601f5bf36a2eaccc93fd172d1ab7d7d74ee36d36c9f29f7dd61b" already present on machine |
| | openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_d29f0a86-47df-4361-8766-f81a84f5d356 became leader |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container cluster-policy-controller |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:2fd1cfd5fb1f91c5b02bce08b256ac0047f0df7072ecc332c2395d43531c0113" already present on machine |
| | openshift-kube-controller-manager | kubelet | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-controller-manager |
| | openshift-kube-controller-manager | cert-recovery-controller | openshift-kube-controller-manager | ControlPlaneTopology | unable to get control plane topology, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused |
| | openshift-dns | endpoint-slice-controller | dns-default | TopologyAwareHintsEnabled | Topology Aware Hints has been enabled, addressType: IPv4 |
| | kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-0_c59f2f9a-ab06-4901-899f-083101c56416 became leader |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-0 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-0 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-1 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-1 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-1 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-c-z8hfz in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-master-2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-master-2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-master-2 in Controller |
| | default | node-controller | ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 | RegisteredNode | Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 event: Registered Node ci-op-2fcpj5j6-f6035-2lklf-worker-b-hj8l2 in Controller |
| | openshift-etcd | multus | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.93/23] from ovn-kubernetes |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-etcd | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 10 to 11 because static pod is ready |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 11"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 10; 2 nodes are at revision 11" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11" |
| | openshift-apiserver | multus | apiserver-79fb6d9f75-wm8d6 | AddedInterface | Add eth0 [10.129.0.74/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | replicaset-controller | apiserver-79fb6d9f75 | SuccessfulCreate | Created pod: apiserver-79fb6d9f75-d2mgw |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 0 to 11 because node ci-op-2fcpj5j6-f6035-2lklf-master-2 static pod not found |
| | openshift-apiserver | replicaset-controller | apiserver-6d6946f85d | SuccessfulDelete | Deleted pod: apiserver-6d6946f85d-wdq7x |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6d6946f85d to 0 from 1 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-79fb6d9f75 to 3 from 2 |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-6d6946f85d-wdq7x | Killing | Stopping container openshift-apiserver |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.72/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-fsynccontroller | etcd-operator | EtcdLeaderChangeMetrics | Detected leader change increase of 2.5 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0=0.007520,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1=0.006640,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2=0.007580. Most often this is as a result of inadequate storage or sometimes due to networking issues. |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-6d6946f85d-wdq7x pod)" |
| | openshift-etcd | static-pod-installer | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | StaticPodInstallerCompleted | Successfully installed revision 13 |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container etcdctl |
| | openshift-etcd | kubelet | etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container etcd-readyz |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 6 to 7 because static pod is ready |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 6; 2 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7" |
| (x13) | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Unhealthy | Readiness probe failed: Get "https://10.0.0.4:9980/readyz": dial tcp 10.0.0.4:9980: connect: connection refused |
| (x13) | openshift-etcd | kubelet | etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ProbeError | Readiness probe error: Get "https://10.0.0.4:9980/readyz": dial tcp 10.0.0.4:9980: connect: connection refused body: |
| | openshift-marketplace | kubelet | certified-operators-hn5hn | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-hn5hn | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-hn5hn | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | multus | certified-operators-hn5hn | AddedInterface | Add eth0 [10.128.0.73/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-hn5hn | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
openshift-marketplace |
kubelet |
certified-operators-hn5hn |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-hn5hn |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-hn5hn |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 869ms (869ms including waiting). Image size: 955380483 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-hn5hn |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
certified-operators-hn5hn |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-hn5hn |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.261s (1.261s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-hn5hn |
Created |
Created container registry-server | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container setup | |
openshift-kube-apiserver |
static-pod-installer |
installer-11-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 11 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ci-op-2fcpj5j6-f6035-2lklf-master-2" to "GuardControllerDegraded: Missing PodIP in operand kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 on node ci-op-2fcpj5j6-f6035-2lklf-master-2" | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-check-endpoints | |
openshift-marketplace |
kubelet |
certified-operators-hn5hn |
Killing |
Stopping container registry-server | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
KubeAPIReadyz |
readyz=true | |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well") |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container setup | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container setup | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container guard | |
openshift-kube-apiserver |
multus |
kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.74/23] from ovn-kubernetes | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-ensure-env-vars | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container guard | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:a5ffe3489a5c049cb2bae31ba55fa7e3a7654d93d833a78f6c0506d2d7c1b272" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:f99379698faa42b7db0e7367c8e7f738a69f699f414a7567a5593a530fb6723d" already present on machine | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container etcd-rev | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container etcd-rev | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-guardcontroller |
kube-apiserver-operator |
PodUpdated |
Updated Pod/kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it changed | |
openshift-etcd |
kubelet |
etcd-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ProbeError |
Readiness probe error: Get "https://10.0.0.4:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 0 to 11 because static pod is ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 11"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 node is at revision 0; 2 nodes are at revision 11" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-fsynccontroller |
etcd-operator |
EtcdLeaderChangeMetrics |
Detected leader change increase of 2.2222222222222223 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0=0.007520,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1=0.006640,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2=0.007580. Most often this is as a result of inadequate storage or sometimes due to networking issues. | |
openshift-etcd |
kubelet |
etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ProbeError |
Startup probe error: Get "https://10.0.0.4:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: | |
openshift-apiserver-operator |
openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
openshift-apiserver-operator |
CustomResourceDefinitionCreateFailed |
Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
kube-apiserver-operator |
CustomResourceDefinitionCreateFailed |
Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 12 to 13 because static pod is ready | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 13\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 12; 2 nodes are at revision 13\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-2fcpj5j6-f6035-2lklf-master-0 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 13\nEtcdMembersAvailable: 2 of 3 members are available, ci-op-2fcpj5j6-f6035-2lklf-master-0 is unhealthy" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-defrag-controller-defragcontroller |
etcd-operator |
DefragControllerDefragmentAttempt |
Attempting defrag on member: ci-op-2fcpj5j6-f6035-2lklf-master-0, memberID: 2dee09550bc489af, dbSize: 114200576, dbInUse: 60092416, leader ID: 5953405296437866173 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-defrag-controller-defragcontroller |
etcd-operator |
DefragControllerDefragmentSuccess |
etcd member has been defragmented: ci-op-2fcpj5j6-f6035-2lklf-master-0, memberID: 3309593037038193071 | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
multus |
community-operators-5bkrv |
AddedInterface |
Add eth0 [10.128.0.75/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.267s (1.267s including waiting). Image size: 1110454519 bytes. | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1s (1s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
community-operators-5bkrv |
Killing |
Stopping container registry-server | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is crashlooping in apiserver-6d6946f85d-wdq7x pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-79fb6d9f75-d2mgw pod, 2 containers are crashlooping in terminated apiserver-6d6946f85d-wdq7x pod)" | |
openshift-apiserver |
multus |
apiserver-79fb6d9f75-d2mgw |
AddedInterface |
Add eth0 [10.128.0.76/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Created |
Created container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-d2mgw |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-79fb6d9f75-d2mgw pod, 2 containers are crashlooping in terminated apiserver-6d6946f85d-wdq7x pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-79fb6d9f75-d2mgw pod)" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] to [{"operator" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"} {"openshift-apiserver" "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest"}] | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-79fb6d9f75-d2mgw pod)" to "All is well" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorVersionChanged |
clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-defrag-controller-defragcontroller |
etcd-operator |
DefragControllerDefragmentSuccess |
etcd member has been defragmented: ci-op-2fcpj5j6-f6035-2lklf-master-1, memberID: 15496572367184033116 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-defrag-controller-defragcontroller |
etcd-operator |
DefragControllerDefragmentAttempt |
Attempting defrag on member: ci-op-2fcpj5j6-f6035-2lklf-master-1, memberID: d70ee276ae23755c, dbSize: 113631232, dbInUse: 60035072, leader ID: 5953405296437866173 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-fsynccontroller |
etcd-operator |
EtcdLeaderChangeMetrics |
Detected leader change increase of 2.2222222222222223 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0=0.007168,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1=0.006104,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2=0.007580. Most often this is as a result of inadequate storage or sometimes due to networking issues. | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-defrag-controller-defragcontroller |
etcd-operator |
DefragControllerDefragmentAttempt |
Attempting defrag on member: ci-op-2fcpj5j6-f6035-2lklf-master-2, memberID: 529ebe931a1baebd, dbSize: 114241536, dbInUse: 60059648, leader ID: 5953405296437866173 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-fsynccontroller |
etcd-operator |
EtcdLeaderChangeMetrics |
Detected leader change increase of 2.2222222222222223 over 5 minutes on "GCP"; disk metrics are: etcd-ci-op-2fcpj5j6-f6035-2lklf-master-0=0.007740,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-1=0.005000,etcd-ci-op-2fcpj5j6-f6035-2lklf-master-2=0.007016. Most often this is as a result of inadequate storage or sometimes due to networking issues. | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-marketplace |
multus |
redhat-marketplace-7d926 |
AddedInterface |
Add eth0 [10.128.0.77/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 4.758s (4.759s including waiting). Image size: 967040755 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 950ms (950ms including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-7d926 |
Killing |
Stopping container registry-server | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-28829625 |
SuccessfulCreate |
Created pod: collect-profiles-28829625-629m7 | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829625-629m7 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-28829625 | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829625-629m7 |
Started |
Started container collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829625-629m7 |
Created |
Created container collect-profiles | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-28829625-629m7 |
AddedInterface |
Add eth0 [10.131.0.18/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-28829625, condition: Complete | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-28829625 |
Completed |
Job completed | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
CreatedSCCRanges |
created SCC ranges for e2e-token-request-bcx7q namespace | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Upgradeable changed from True to False ("CertRotationTimeUpgradeable: configmap[\"openshift-config\"]/unsupported-cert-rotation-config .data[\"base\"]==\"2y\"") | |
| | openshift-kube-apiserver | cert-regeneration-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | SecretUpdated | Updated Secret/aggregator-client-signer -n openshift-kube-apiserver-operator because it changed |
| | openshift-kube-apiserver | cert-regeneration-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | ConfigMapUpdated | Updated ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it changed |
| | openshift-kube-apiserver | cert-regeneration-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | SecretCreateFailed | Failed to create Secret/: secrets "aggregator-client-signer" already exists |
| | openshift-kube-apiserver | cert-regeneration-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | SignerUpdateRequired | "aggregator-client-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver | cert-regeneration-controller-cert-rotation-controller-AggregatorProxyClientCert-certrotationcontroller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | RotationError | secrets "aggregator-client-signer" already exists |
| | openshift-kube-apiserver | cert-regeneration-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert: signer update openshift-kube-apiserver/aggregator-client |
| | openshift-kube-apiserver | cert-regeneration-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | SignerUpdateRequired | "aggregator-client-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: missing notAfter |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "aggregator-client-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: secret doesn't exist |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Upgradeable changed from False to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.") |
| | openshift-kube-controller-manager | cert-syncer-cert-sync-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateUpdated | Wrote updated configmap: openshift-kube-controller-manager/aggregator-client-ca |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| | openshift-kube-controller-manager | cert-syncer-cert-sync-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateUpdated | Wrote updated configmap: openshift-kube-controller-manager/aggregator-client-ca |
| | openshift-kube-controller-manager | cert-syncer-cert-sync-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateUpdated | Wrote updated configmap: openshift-kube-controller-manager/aggregator-client-ca |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreateFailed | Failed to create Secret/: secrets "aggregator-client-signer" already exists |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateUpdated | Wrote updated configmap: openshift-kube-apiserver/aggregator-client-ca |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-AggregatorProxyClientCert-certrotationcontroller | kube-apiserver-operator | RotationError | secrets "aggregator-client-signer" already exists |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateUpdated | Wrote updated configmap: openshift-kube-apiserver/aggregator-client-ca |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nCertRotation_AggregatorProxyClientCert_Degraded: secrets \"aggregator-client-signer\" already exists" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/aggregator-client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nCertRotation_AggregatorProxyClientCert_Degraded: secrets \"aggregator-client-signer\" already exists" |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateUpdated | Wrote updated configmap: openshift-kube-apiserver/aggregator-client-ca |
| | openshift-authentication-operator | oauth-apiserver-encryption-key-controller-encryptionkeycontroller | authentication-operator | EncryptionKeyCreated | Secret "encryption-key-openshift-oauth-apiserver-1" successfully created: ["key-does-not-exist"] |
| | openshift-apiserver-operator | openshift-apiserver-operator-encryption-key-controller-encryptionkeycontroller | openshift-apiserver-operator | EncryptionKeyCreated | Secret "encryption-key-openshift-apiserver-1" successfully created: ["routes-key-does-not-exist"] |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | multus | redhat-operators-4vz46 | AddedInterface | Add eth0 [10.129.0.75/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Created | Created container extract-content |
| | openshift-marketplace | multus | certified-operators-jtb9t | AddedInterface | Add eth0 [10.129.0.76/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.267s (1.267s including waiting). Image size: 1411450299 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-7f98b5f8b5 to 2 from 1 |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-58f6d575c4 to 1 from 2 |
| | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-a3vgnb8q5kf4b -n openshift-monitoring because it was missing |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-7f98b5f8b5 to 1 |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Killing | Stopping container metrics-server |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-monitoring | replicaset-controller | metrics-server-58f6d575c4 | SuccessfulDelete | Deleted pod: metrics-server-58f6d575c4-k7fwq |
| | openshift-monitoring | replicaset-controller | metrics-server-7f98b5f8b5 | SuccessfulCreate | Created pod: metrics-server-7f98b5f8b5-9v6xq |
| | openshift-monitoring | replicaset-controller | metrics-server-7f98b5f8b5 | SuccessfulCreate | Created pod: metrics-server-7f98b5f8b5-p26dm |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.064s (1.064s including waiting). Image size: 896974229 bytes. |
| | openshift-apiserver-operator | openshift-apiserver-operator-encryption-state-controller-encryptionstatecontroller | openshift-apiserver-operator | SecretCreated | Created Secret/encryption-config-openshift-apiserver -n openshift-config-managed because it was missing |
| | openshift-monitoring | kubelet | metrics-server-7f98b5f8b5-p26dm | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Started | Started container registry-server |
| | openshift-monitoring | multus | metrics-server-7f98b5f8b5-p26dm | AddedInterface | Add eth0 [10.131.0.19/23] from ovn-kubernetes |
| | openshift-apiserver-operator | openshift-apiserver-operator-encryption-state-controller-encryptionstatecontroller | openshift-apiserver-operator | EncryptionResourceAdded | Resource "routes.route.openshift.io" was added to encryption config without write key |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 2.182s (2.182s including waiting). Image size: 955380483 bytes. |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Started | Started container extract-content |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-encryption-key-controller-encryptionkeycontroller | kube-apiserver-operator | EncryptionKeyCreated | Secret "encryption-key-openshift-kube-apiserver-1" successfully created: ["key-does-not-exist"] |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 2 triggered by "optional secret/encryption-config has been created" |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretCreated | Created Secret/encryption-config -n openshift-apiserver because it was missing |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-monitoring | kubelet | metrics-server-7f98b5f8b5-p26dm | Started | Started container metrics-server |
| | openshift-authentication-operator | oauth-apiserver-encryption-state-controller-encryptionstatecontroller | authentication-operator | SecretCreated | Created Secret/encryption-config-openshift-oauth-apiserver -n openshift-config-managed because it was missing |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.246s (1.246s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Created | Created container registry-server |
| | openshift-monitoring | kubelet | metrics-server-7f98b5f8b5-p26dm | Created | Created container metrics-server |
| | openshift-monitoring | kubelet | metrics-server-7f98b5f8b5-p26dm | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" in 3.391s (3.391s including waiting). Image size: 451216469 bytes. |
| | openshift-authentication-operator | oauth-apiserver-encryption-state-controller-encryptionstatecontroller | authentication-operator | EncryptionResourceAdded | Resource "oauthaccesstokens.oauth.openshift.io" was added to encryption config without write key |
| | openshift-authentication-operator | oauth-apiserver-encryption-state-controller-encryptionstatecontroller | authentication-operator | EncryptionResourceAdded | Resource "oauthauthorizetokens.oauth.openshift.io" was added to encryption config without write key |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Started | Started container registry-server |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-2 -n openshift-apiserver because it was missing |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | SecretCreated | Created Secret/encryption-config-2 -n openshift-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-resource-sync-controller-resourcesynccontroller | authentication-operator | SecretCreated | Created Secret/encryption-config -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n+\u00a0\t\t\"encryption-provider-config\": []any{string(\"/var/run/secrets/encryption-config/encryption-config\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{string(\"https://10.0.0.3:2379\"), string(\"https://10.0.0.4:2379\"), string(\"https://10.0.0.6:2379\")},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 2 triggered by "optional secret/encryption-config has been created" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveEncryptionConfigChanged | encryption config file changed from [] to /var/run/secrets/encryption-config/encryption-config |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | StartingNewRevision | new revision 2 triggered by "optional secret/encryption-config has been created" |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObserveEncryptionConfigChanged | encryption config file changed from [] to /var/run/secrets/encryption-config/encryption-config |
| | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "apiServerArguments": map[string]any{ + "encryption-provider-config": []any{string("/var/run/secrets/encryption-config/encryption-config")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, }, "imagePolicyConfig": map[string]any{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")}, "projectConfig": map[string]any{"projectRequestMessage": string("")}, ... // 3 identical entries } |
| | openshift-marketplace | kubelet | redhat-operators-4vz46 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | certified-operators-jtb9t | Killing | Stopping container registry-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-encryption-state-controller-encryptionstatecontroller | kube-apiserver-operator | EncryptionResourceAdded | Resource "secrets" was added to encryption config without write key |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-encryption-state-controller-encryptionstatecontroller | kube-apiserver-operator | SecretCreated | Created Secret/encryption-config-openshift-kube-apiserver -n openshift-config-managed because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-encryption-state-controller-encryptionstatecontroller | kube-apiserver-operator | EncryptionResourceAdded | Resource "configmaps" was added to encryption config without write key |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit-2 -n openshift-oauth-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 12 triggered by "optional secret/encryption-config has been created" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveEncryptionConfigChanged | encryption config file changed from [] to /etc/kubernetes/static-pod-resources/secrets/encryption-config/encryption-config |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/encryption-config -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, + "encryption-provider-config": []any{ + string("/etc/kubernetes/static-pod-resources/secrets/encryption-config/encryption-config"), + }, "etcd-servers": []any{string("https://10.0.0.3:2379"), string("https://10.0.0.4:2379"), string("https://10.0.0.6:2379"), string("https://localhost:2379")}, "feature-gates": []any{string("AWSEFSDriverVolumeMetrics=true"), string("AdminNetworkPolicy=true"), string("AlibabaPlatform=true"), string("AzureWorkloadIdentity=true"), ...}, ... // 4 identical entries }, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, ... // 3 identical entries } |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | SecretCreated | Created Secret/encryption-config-2 -n openshift-oauth-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 2 triggered by "optional secret/encryption-config has been created" |
| (x3) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-apiserver: cause by changes in data.config.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-12 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-d2mgw | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-678f64f7c9 to 1 from 0 |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 7, desired generation is 8.") |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 7, desired generation is 8." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 8, desired generation is 9.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 7, desired generation is 8." |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-d2mgw | Killing | Stopping container openshift-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6dcfd955f4 | SuccessfulDelete | Deleted pod: apiserver-6dcfd955f4-2jcfl |
| | openshift-apiserver | replicaset-controller | apiserver-678f64f7c9 | SuccessfulCreate | Created pod: apiserver-678f64f7c9-h6bfx |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-669dcc6dbc to 1 from 0 |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6dcfd955f4 to 2 from 3 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-669dcc6dbc | SuccessfulCreate | Created pod: apiserver-669dcc6dbc-rbnvm |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-79fb6d9f75 to 2 from 3 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 5, desired generation is 6.") |
| | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | Killing | Stopping container oauth-apiserver |
| | openshift-apiserver | replicaset-controller | apiserver-79fb6d9f75 | SuccessfulDelete | Deleted pod: apiserver-79fb6d9f75-d2mgw |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-12 -n openshift-kube-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 8, desired generation is 9.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 7, desired generation is 8." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 8, desired generation is 9." |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 8, desired generation is 9." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-12 -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-669dcc6dbc | SuccessfulDelete | Deleted pod: apiserver-669dcc6dbc-rbnvm |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-669dcc6dbc to 0 from 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 6, desired generation is 7." |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-65b45c6554 to 1 from 0 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-65b45c6554 | SuccessfulCreate | Created pod: apiserver-65b45c6554-p4w47 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-oauth-apiserver | kubelet | apiserver-6dcfd955f4-2jcfl | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-monitoring | replicaset-controller | metrics-server-58f6d575c4 | SuccessfulDelete | Deleted pod: metrics-server-58f6d575c4-qj8vk |
| | openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-58f6d575c4 to 0 from 1 |
| | openshift-monitoring | kubelet | metrics-server-58f6d575c4-qj8vk | Killing | Stopping container metrics-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/encryption-config-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-12 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 13 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 12 triggered by "optional secret/encryption-config has been created" |
| (x7) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-d2mgw | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-13 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-13 -n openshift-kube-apiserver because it was missing |
| (x9) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-d2mgw | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-13 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-13 -n openshift-kube-apiserver because it was missing |
| (x4) | openshift-monitoring | kubelet | metrics-server-58f6d575c4-k7fwq | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]metric-storage-ready ok [+]metric-informer-sync ok [+]metadata-informer-sync ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | multus | revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.77/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-13 -n openshift-kube-apiserver because it was missing |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
AddedInterface |
Add eth0 [10.130.0.94/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container pruner | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-13 -n openshift-kube-apiserver because it was missing | |
openshift-oauth-apiserver |
multus |
apiserver-65b45c6554-p4w47 |
AddedInterface |
Add eth0 [10.128.0.78/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-p4w47 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-p4w47 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-13 -n openshift-kube-apiserver because it was missing | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-p4w47 |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-p4w47 |
Created |
Created container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-p4w47 |
Created |
Created container oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-p4w47 |
Started |
Started container oauth-apiserver | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-13 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
multus |
revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.79/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-12-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container pruner | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-65b45c6554 |
SuccessfulCreate |
Created pod: apiserver-65b45c6554-tglr4 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-6dcfd955f4 to 1 from 2 | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-65b45c6554 to 2 from 1 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-6dcfd955f4 |
SuccessfulDelete |
Deleted pod: apiserver-6dcfd955f4-p5j6s | |
openshift-oauth-apiserver |
kubelet |
apiserver-6dcfd955f4-p5j6s |
Killing |
Stopping container oauth-apiserver | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-13 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-13 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-13 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/encryption-config-13 -n openshift-kube-apiserver because it was missing | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Created |
Created container extract-utilities | |
openshift-marketplace |
multus |
community-operators-l2rlg |
AddedInterface |
Add eth0 [10.128.0.80/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-13 -n openshift-kube-apiserver because it was missing | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.772s (1.772s including waiting). Image size: 1110454519 bytes. | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-13 -n openshift-kube-apiserver because it was missing | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.022s (1.022s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Created |
Created container registry-server | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-13 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 13 triggered by "required configmap/config has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 11; 0 nodes have achieved new revision 12"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11; 0 nodes have achieved new revision 12" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 11 to 12 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 11 is the oldest | |
openshift-marketplace |
kubelet |
community-operators-l2rlg |
Killing |
Stopping container registry-server | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.78/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
multus |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
AddedInterface |
Add eth0 [10.130.0.95/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container pruner | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container pruner | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 11 to 13 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 11 is the oldest | |
openshift-kube-apiserver |
kubelet |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
multus |
revision-pruner-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.81/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 11; 0 nodes have achieved new revision 12" to "NodeInstallerProgressing: 3 nodes are at revision 11; 0 nodes have achieved new revision 13",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11; 0 nodes have achieved new revision 12" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11; 0 nodes have achieved new revision 13" | |
| (x9) | openshift-oauth-apiserver |
kubelet |
apiserver-6dcfd955f4-p5j6s |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x10) | openshift-oauth-apiserver |
kubelet |
apiserver-6dcfd955f4-p5j6s |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-tglr4 |
Created |
Created container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-tglr4 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-tglr4 |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
multus |
apiserver-65b45c6554-tglr4 |
AddedInterface |
Add eth0 [10.129.0.79/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-tglr4 |
Started |
Started container oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-tglr4 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-tglr4 |
Created |
Created container oauth-apiserver | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Created |
Created container openshift-apiserver-check-endpoints | |
| (x9) | openshift-monitoring |
kubelet |
metrics-server-58f6d575c4-qj8vk |
FailedMount |
MountVolume.SetUp failed for volume "client-ca-bundle" : secret "metrics-server-3g41mr2412eu" not found |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Created |
Created container openshift-apiserver | |
openshift-apiserver |
multus |
apiserver-678f64f7c9-h6bfx |
AddedInterface |
Add eth0 [10.128.0.82/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-6dcfd955f4 |
SuccessfulDelete |
Deleted pod: apiserver-6dcfd955f4-fpnbz | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-65b45c6554 to 3 from 2 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-65b45c6554 |
SuccessfulCreate |
Created pod: apiserver-65b45c6554-984d7 | |
openshift-oauth-apiserver |
kubelet |
apiserver-6dcfd955f4-fpnbz |
Killing |
Stopping container oauth-apiserver | |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-6dcfd955f4 to 0 from 1 | |
openshift-apiserver |
replicaset-controller |
apiserver-79fb6d9f75 |
SuccessfulDelete |
Deleted pod: apiserver-79fb6d9f75-wm8d6 | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-79fb6d9f75 to 1 from 2 | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-wm8d6 |
Killing |
Stopping container openshift-apiserver | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-678f64f7c9 to 2 from 1 | |
openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-wm8d6 |
Killing |
Stopping container openshift-apiserver-check-endpoints | |
openshift-apiserver |
replicaset-controller |
apiserver-678f64f7c9 |
SuccessfulCreate |
Created pod: apiserver-678f64f7c9-qqtmg | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
multus |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.80/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-monitoring |
kubelet |
metrics-server-7f98b5f8b5-9v6xq |
Started |
Started container metrics-server | |
openshift-monitoring |
multus |
metrics-server-7f98b5f8b5-9v6xq |
AddedInterface |
Add eth0 [10.128.2.17/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
metrics-server-7f98b5f8b5-9v6xq |
Created |
Created container metrics-server | |
openshift-monitoring |
kubelet |
metrics-server-7f98b5f8b5-9v6xq |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" already present on machine | |
| (x7) | openshift-monitoring |
kubelet |
metrics-server-58f6d575c4-qj8vk |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x8) | openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-wm8d6 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x8) | openshift-apiserver |
kubelet |
apiserver-79fb6d9f75-wm8d6 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-kube-apiserver |
static-pod-installer |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
StaticPodInstallerCompleted |
Successfully installed revision 13 | |
| (x8) | openshift-monitoring |
kubelet |
metrics-server-58f6d575c4-qj8vk |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]metric-storage-ready ok [+]metric-informer-sync ok [+]metadata-informer-sync ok [-]shutdown failed: reason withheld readyz check failed |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Killing |
Stopping container kube-apiserver | |
| (x10) | openshift-oauth-apiserver |
kubelet |
apiserver-6dcfd955f4-fpnbz |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x10) | openshift-oauth-apiserver |
kubelet |
apiserver-6dcfd955f4-fpnbz |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-oauth-apiserver |
kubelet |
apiserver-6dcfd955f4-fpnbz |
ProbeError |
Readiness probe error: Get "https://10.130.0.89:8443/readyz": dial tcp 10.130.0.89:8443: connect: connection refused body: | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-1_681bb29a-23cc-44a5-8b0a-c3da65d829ca became leader | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-984d7 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-984d7 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-oauth-apiserver |
multus |
apiserver-65b45c6554-984d7 |
AddedInterface |
Add eth0 [10.130.0.96/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-984d7 |
Created |
Created container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-984d7 |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-984d7 |
Started |
Started container oauth-apiserver | |
openshift-oauth-apiserver |
kubelet |
apiserver-65b45c6554-984d7 |
Created |
Created container oauth-apiserver | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-65b45c6554-984d7 pod)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-65b45c6554-984d7 pod)" to "All is well" | |
openshift-authentication-operator |
oauth-apiserver-encryption-state-controller-encryptionstatecontroller |
authentication-operator |
EncryptionKeyPromoted |
Promoting key "1" for resource "oauthaccesstokens.oauth.openshift.io" to write key | |
openshift-authentication-operator |
oauth-apiserver-encryption-state-controller-encryptionstatecontroller |
authentication-operator |
EncryptionKeyPromoted |
Promoting key "1" for resource "oauthauthorizetokens.oauth.openshift.io" to write key | |
openshift-authentication-operator |
oauth-apiserver-encryption-state-controller-encryptionstatecontroller |
authentication-operator |
SecretUpdated |
Updated Secret/encryption-config-openshift-oauth-apiserver -n openshift-config-managed because it changed | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
SecretUpdated |
Updated Secret/encryption-config -n openshift-oauth-apiserver because it changed | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
StartingNewRevision |
new revision 3 triggered by "optional secret/encryption-config has changed" | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit-3 -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
SecretCreated |
Created Secret/encryption-config-3 -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
RevisionTriggered |
new revision 3 triggered by "optional secret/encryption-config has changed" | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-65b45c6554 |
SuccessfulDelete |
Deleted pod: apiserver-65b45c6554-984d7 | |
openshift-oauth-apiserver |
replicaset-controller |
apiserver-6d668d4fc7 |
SuccessfulCreate |
Created pod: apiserver-6d668d4fc7-q9mjb | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 7, desired generation is 8.") | |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-65b45c6554 to 2 from 3 |
| | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-984d7 | Killing | Stopping container oauth-apiserver |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 7, desired generation is 8." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| (x11) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-wm8d6 | ProbeError | Readiness probe error: Get "https://10.129.0.74:8443/readyz": dial tcp 10.129.0.74:8443: connect: connection refused body: |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Created | Created container fix-audit-permissions |
| | openshift-apiserver | multus | apiserver-678f64f7c9-qqtmg | AddedInterface | Add eth0 [10.129.0.81/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Started | Started container openshift-apiserver |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok livez check failed |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-qqtmg | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-79fb6d9f75 to 0 from 1 |
| | openshift-apiserver | replicaset-controller | apiserver-79fb6d9f75 | SuccessfulDelete | Deleted pod: apiserver-79fb6d9f75-tmvdf |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-678f64f7c9 to 3 from 2 |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver | replicaset-controller | apiserver-678f64f7c9 | SuccessfulCreate | Created pod: apiserver-678f64f7c9-lh9lm |
| | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-network-diagnostics | check-endpoint | ci-op-2fcpj5j6-f6035-2lklf-worker-a-8hj4x | ConnectivityOutageDetected | Connectivity outage detected: kubernetes-apiserver-endpoint-ci-op-2fcpj5j6-f6035-2lklf-master-0: failed to establish a TCP connection to 10.0.0.3:6443: dial tcp 10.0.0.3:6443: connect: connection refused |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | KubeAPIReadyz | readyz=true |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-984d7 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-984d7 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-marketplace | multus | redhat-marketplace-4q6c9 | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Started | Started container extract-content |
| | openshift-oauth-apiserver | multus | apiserver-6d668d4fc7-q9mjb | AddedInterface | Add eth0 [10.130.0.97/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 3.621s (3.621s including waiting). Image size: 967040755 bytes. |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-q9mjb | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Created | Created container extract-content |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-q9mjb | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-q9mjb | Started | Started container fix-audit-permissions |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-q9mjb | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-q9mjb | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-q9mjb | Created | Created container oauth-apiserver |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 917ms (918ms including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Started | Started container registry-server |
| | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-p4w47 | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-65b45c6554 | SuccessfulDelete | Deleted pod: apiserver-65b45c6554-p4w47 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6d668d4fc7 | SuccessfulCreate | Created pod: apiserver-6d668d4fc7-d9msv |
| (x4) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation" |
| | openshift-marketplace | kubelet | redhat-marketplace-4q6c9 | Killing | Stopping container registry-server |
| (x2) | openshift-apiserver | kubelet | apiserver-79fb6d9f75-tmvdf | ProbeError | Readiness probe error: Get "https://10.130.0.90:8443/readyz": dial tcp 10.130.0.90:8443: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 11 to 13 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 11; 0 nodes have achieved new revision 13" to "NodeInstallerProgressing: 2 nodes are at revision 11; 1 node is at revision 13",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 11; 0 nodes have achieved new revision 13" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 11; 1 node is at revision 13" |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-p4w47 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-p4w47 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 11 to 13 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 with revision 11 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.98/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-oauth-apiserver | multus | apiserver-6d668d4fc7-d9msv | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | Created | Created container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | Started | Started container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-tglr4 | Killing | Stopping container oauth-apiserver |
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6d668d4fc7 | SuccessfulCreate | Created pod: apiserver-6d668d4fc7-x2q4p |
| (x5) | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | (combined from similar events): Scaled up replica set apiserver-6d668d4fc7 to 3 from 2 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-65b45c6554 | SuccessfulDelete | Deleted pod: apiserver-65b45c6554-tglr4 |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-apiserver | multus | apiserver-678f64f7c9-lh9lm | AddedInterface | Add eth0 [10.130.0.99/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Started | Started container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Started | Started container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-apiserver-operator | openshift-apiserver-operator-encryption-state-controller-encryptionstatecontroller | openshift-apiserver-operator | SecretUpdated | Updated Secret/encryption-config-openshift-apiserver -n openshift-config-managed because it changed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | static-pod-installer | installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-1 | StaticPodInstallerCompleted | Successfully installed revision 13 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-apiserver-operator | openshift-apiserver-operator-encryption-state-controller-encryptionstatecontroller | openshift-apiserver-operator | EncryptionKeyPromoted | Promoting key "1" for resource "routes.route.openshift.io" to write key |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-apiserver-operator | openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller | openshift-apiserver-operator | SecretUpdated | Updated Secret/encryption-config -n openshift-apiserver because it changed |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | StartingNewRevision | new revision 3 triggered by "optional secret/encryption-config has changed" |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_2c11a245-aabf-474d-b1e0-b0a8eb52696c became leader |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/audit-3 -n openshift-apiserver because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | SecretCreated | Created Secret/encryption-config-3 -n openshift-apiserver because it was missing |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-tglr4 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x9) | openshift-oauth-apiserver | kubelet | apiserver-65b45c6554-tglr4 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 3 triggered by "optional secret/encryption-config has changed" |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-x2q4p | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-x2q4p | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-x2q4p | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | multus | apiserver-6d668d4fc7-x2q4p | AddedInterface | Add eth0 [10.129.0.82/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-x2q4p | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-x2q4p | Created | Created container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-x2q4p | Started | Started container oauth-apiserver |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6d668d4fc7-x2q4p pod)" |
| (x5) | openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
| | openshift-apiserver | replicaset-controller | apiserver-6545b7bd68 | SuccessfulCreate | Created pod: apiserver-6545b7bd68-hjg8d |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6545b7bd68 to 1 from 0 |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-678f64f7c9 to 2 from 3 |
| | openshift-apiserver | replicaset-controller | apiserver-678f64f7c9 | SuccessfulDelete | Deleted pod: apiserver-678f64f7c9-lh9lm |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 9, desired generation is 10.") |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Killing | Stopping container openshift-apiserver |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 9, desired generation is 10." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6d668d4fc7-x2q4p pod)" to "All is well" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True ("EncryptionMigrationControllerProgressing: migrating resources to a new write key: [oauth.openshift.io/oauthaccesstokens oauth.openshift.io/oauthauthorizetokens]") |
| (x7) | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x7) | openshift-apiserver | kubelet | apiserver-678f64f7c9-lh9lm | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x2) | openshift-authentication-operator | oauth-apiserver-encryption-migration-controller-encryptionmigrationcontroller | authentication-operator | SecretUpdated | Updated Secret/encryption-key-openshift-oauth-apiserver-1 -n openshift-config-managed because it changed |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-apiserver | kubelet | apiserver-6545b7bd68-hjg8d | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-6545b7bd68-hjg8d | Started | Started container fix-audit-permissions |
openshift-apiserver |
multus |
apiserver-6545b7bd68-hjg8d |
AddedInterface |
Add eth0 [10.130.0.100/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-hjg8d |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-kube-apiserver |
cert-regeneration-controller |
openshift-kube-apiserver |
ControlPlaneTopology |
unable to get control plane topology, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-hjg8d |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-hjg8d |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-hjg8d |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-hjg8d |
Created |
Created container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-hjg8d |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ProbeError |
Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check 
failed | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Unhealthy |
Startup probe failed: HTTP probe failed with statuscode: 500 | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-hjg8d |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
KubeAPIReadyz |
readyz=true | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-qqtmg |
Killing |
Stopping container openshift-apiserver-check-endpoints | |
openshift-apiserver |
replicaset-controller |
apiserver-678f64f7c9 |
SuccessfulDelete |
Deleted pod: apiserver-678f64f7c9-qqtmg | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-678f64f7c9 to 1 from 2 | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-qqtmg |
Killing |
Stopping container openshift-apiserver | |
openshift-apiserver |
replicaset-controller |
apiserver-6545b7bd68 |
SuccessfulCreate |
Created pod: apiserver-6545b7bd68-wnbqx | |
| (x4) | openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation" |
| (x6) | openshift-apiserver |
kubelet |
apiserver-678f64f7c9-qqtmg |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
| (x6) | openshift-apiserver |
kubelet |
apiserver-678f64f7c9-qqtmg |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 11; 1 node is at revision 13" to "NodeInstallerProgressing: 1 node is at revision 11; 2 nodes are at revision 13",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 11; 1 node is at revision 13" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 11; 2 nodes are at revision 13" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 11 to 13 because static pod is ready | |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 11 to 13 because node ci-op-2fcpj5j6-f6035-2lklf-master-2 with revision 11 is the oldest |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
| (x8) | openshift-apiserver |
kubelet |
apiserver-678f64f7c9-qqtmg |
ProbeError |
Readiness probe error: Get "https://10.129.0.81:8443/readyz": dial tcp 10.129.0.81:8443: connect: connection refused body: |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
static-pod-installer |
installer-13-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 13 | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver | |
openshift-apiserver |
multus |
apiserver-6545b7bd68-wnbqx |
AddedInterface |
Add eth0 [10.129.0.83/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Created |
Created container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-wnbqx |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Killing |
Stopping container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Killing |
Stopping container openshift-apiserver-check-endpoints | |
| (x3) | openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
(combined from similar events): Scaled up replica set apiserver-6545b7bd68 to 3 from 2 |
openshift-apiserver |
replicaset-controller |
apiserver-678f64f7c9 |
SuccessfulDelete |
Deleted pod: apiserver-678f64f7c9-h6bfx | |
openshift-apiserver |
replicaset-controller |
apiserver-6545b7bd68 |
SuccessfulCreate |
Created pod: apiserver-6545b7bd68-jsb4b | |
| (x3) | openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well") |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-0_dff0c1ea-cceb-46d3-8401-290e0da91f7e became leader | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
| (x7) | openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x8) | openshift-apiserver |
kubelet |
apiserver-678f64f7c9-h6bfx |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
multus |
certified-operators-8dppf |
AddedInterface |
Add eth0 [10.129.0.84/23] from ovn-kubernetes | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 1.621s (1.621s including waiting). Image size: 955380483 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Started |
Started container extract-content | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Created |
Created container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Started |
Started container fix-audit-permissions | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 977ms (977ms including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Created |
Created container registry-server | |
openshift-apiserver |
multus |
apiserver-6545b7bd68-jsb4b |
AddedInterface |
Add eth0 [10.128.0.86/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:3bb0c7580e7044dd06a53e02ebb5d819447eeb88badd829a89757ccecf135cb4" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-6545b7bd68-jsb4b |
Created |
Created container openshift-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
KubeAPIReadyz |
readyz=true | |
openshift-marketplace |
kubelet |
certified-operators-8dppf |
Killing |
Stopping container registry-server | |
| (x2) | openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "All is well" |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("EncryptionMigrationControllerProgressing: migrating resources to a new write key: [route.openshift.io/routes]") | |
openshift-marketplace |
multus |
redhat-operators-lmrfh |
AddedInterface |
Add eth0 [10.129.0.85/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 2.611s (2.611s including waiting). Image size: 1411450299 bytes. | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 954ms (954ms including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Started |
Started container registry-server | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from True to False ("All is well") | |
openshift-apiserver-operator |
openshift-apiserver-operator-encryption-migration-controller-encryptionmigrationcontroller |
openshift-apiserver-operator |
SecretUpdated |
Updated Secret/encryption-key-openshift-apiserver-1 -n openshift-config-managed because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-encryption-state-controller-encryptionstatecontroller |
kube-apiserver-operator |
EncryptionKeyPromoted |
Promoting key "1" for resource "secrets" to write key | |
openshift-marketplace |
kubelet |
redhat-operators-lmrfh |
Killing |
Stopping container registry-server | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-encryption-state-controller-encryptionstatecontroller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/encryption-config-openshift-kube-apiserver -n openshift-config-managed because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-encryption-state-controller-encryptionstatecontroller |
kube-apiserver-operator |
EncryptionKeyPromoted |
Promoting key "1" for resource "configmaps" to write key | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
StartingNewRevision |
new revision 14 triggered by "optional secret/encryption-config has changed" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
SecretUpdated |
Updated Secret/encryption-config -n openshift-kube-apiserver because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-14 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-14 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-14 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-14 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-14 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-14 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-14 -n openshift-kube-apiserver because it was missing | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-14 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-14 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-14 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 13"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 11; 2 nodes are at revision 13" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 13" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-14 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/encryption-config-14 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-14 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-14 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-14 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 11 to 13 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 14 triggered by "optional secret/encryption-config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.86/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver | multus | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.101/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.87/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 13; 0 nodes have achieved new revision 14"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 13" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 13; 0 nodes have achieved new revision 14" |
| | openshift-marketplace | kubelet | community-operators-s6thr | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-s6thr | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-s6thr | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | community-operators-s6thr | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | multus | community-operators-s6thr | AddedInterface | Add eth0 [10.128.0.88/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-s6thr | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 977ms (977ms including waiting). Image size: 1110454519 bytes. |
| | openshift-marketplace | kubelet | community-operators-s6thr | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-s6thr | Started | Started container extract-content |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 13 to 14 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 13 is the oldest |
| | openshift-marketplace | kubelet | community-operators-s6thr | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | community-operators-s6thr | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 878ms (878ms including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | community-operators-s6thr | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-s6thr | Created | Created container registry-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | multus | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.87/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-marketplace | kubelet | community-operators-s6thr | Killing | Stopping container registry-server |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-28829640-fvzjr | AddedInterface | Add eth0 [10.131.0.20/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829640-fvzjr | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829640-fvzjr | Created | Created container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829640-fvzjr | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829640 | SuccessfulCreate | Created pod: collect-profiles-28829640-fvzjr |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-28829640 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | static-pod-installer | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 14 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-28829595 |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_bc6d83ee-e249-4578-9aba-7962c9132d28 became leader |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829640 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-28829640, condition: Complete |
| (x46) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | multus | redhat-marketplace-2p9x6 | AddedInterface | Add eth0 [10.128.0.89/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.541s (1.541s including waiting). Image size: 967040755 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Started | Started container registry-server |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 966ms (966ms including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-2p9x6 | Killing | Stopping container registry-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 13 to 14 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 13; 0 nodes have achieved new revision 14" to "NodeInstallerProgressing: 2 nodes are at revision 13; 1 node is at revision 14",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 13; 0 nodes have achieved new revision 14" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 13; 1 node is at revision 14" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 13 to 14 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 with revision 13 is the oldest |
| | openshift-kube-apiserver | kubelet | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.102/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | static-pod-installer | installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-1 | StaticPodInstallerCompleted | Successfully installed revision 14 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| (x16) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_ed693b17-0283-495a-9e2d-635242e97169 became leader |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 13 to 14 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 13; 1 node is at revision 14" to "NodeInstallerProgressing: 1 node is at revision 13; 2 nodes are at revision 14",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 13; 1 node is at revision 14" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 13; 2 nodes are at revision 14" |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 13 to 14 because node ci-op-2fcpj5j6-f6035-2lklf-master-2 with revision 13 is the oldest | |
openshift-kube-apiserver |
kubelet |
installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
multus |
installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.90/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
static-pod-installer |
installer-14-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 14 | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ProbeError |
Readiness probe error: Get "https://10.0.0.6:17697/healthz": dial tcp 10.0.0.6:17697: connect: connection refused body: | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Unhealthy |
Readiness probe failed: Get "https://10.0.0.6:17697/healthz": dial tcp 10.0.0.6:17697: connect: connection refused | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-1_0d468135-44f3-4260-9c83-3fc9575b2ee6 became leader | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
KubeAPIReadyz |
readyz=true | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 node is at revision 13; 2 nodes are at revision 14" to "EncryptionMigrationControllerProgressing: migrating resources to a new write key: [core/configmaps core/secrets]\nNodeInstallerProgressing: 1 node is at revision 13; 2 nodes are at revision 14" | |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-6db848448f to 2 |
| | openshift-console | replicaset-controller | console-6db848448f | SuccessfulCreate | Created pod: console-6db848448f-lt54s |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-849dfdb48 to 1 from 2 |
| | openshift-console | replicaset-controller | console-6db848448f | SuccessfulDelete | Deleted pod: console-6db848448f-lt54s |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-6689f89885 to 2 |
| | openshift-controller-manager | replicaset-controller | controller-manager-69b9cd8b79 | SuccessfulCreate | Created pod: controller-manager-69b9cd8b79-2w8mk |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-z7dl4 | Killing | Stopping container controller-manager |
| | openshift-console | replicaset-controller | console-6689f89885 | SuccessfulCreate | Created pod: console-6689f89885-m729m |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-6db848448f to 0 from 2 |
| | openshift-console | kubelet | console-849dfdb48-8n92v | Killing | Stopping container console |
| | openshift-console | replicaset-controller | console-6689f89885 | SuccessfulCreate | Created pod: console-6689f89885-9kjdn |
| | openshift-console | replicaset-controller | console-6db848448f | SuccessfulCreate | Created pod: console-6db848448f-kcnxq |
| | openshift-console | replicaset-controller | console-6db848448f | SuccessfulDelete | Deleted pod: console-6db848448f-kcnxq |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-795448867c to 2 from 3 |
| | openshift-console | replicaset-controller | console-849dfdb48 | SuccessfulDelete | Deleted pod: console-849dfdb48-8n92v |
| (x2) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdateFailed | Failed to update Deployment.apps/console -n openshift-console: Operation cannot be fulfilled on deployments.apps "console": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again",Progressing changed from True to False ("All is well") |
| | openshift-controller-manager | replicaset-controller | controller-manager-795448867c | SuccessfulDelete | Deleted pod: controller-manager-795448867c-z7dl4 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-69b9cd8b79 to 1 from 0 |
| | openshift-authentication | kubelet | oauth-openshift-67d88f768b-zbp7v | Killing | Stopping container oauth-openshift |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.") |
| | openshift-authentication | replicaset-controller | oauth-openshift-67d88f768b | SuccessfulDelete | Deleted pod: oauth-openshift-67d88f768b-zbp7v |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.") |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-c84b6d8c7 to 1 from 0 |
| | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "DeploymentSyncDegraded: Operation cannot be fulfilled on deployments.apps \"console\": the object has been modified; please apply your changes to the latest version and try again" to "All is well",Progressing changed from False to True ("SyncLoopRefreshProgressing: working toward version 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, 1 replicas available") |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-67d88f768b to 2 from 3 |
| | openshift-authentication | replicaset-controller | oauth-openshift-c84b6d8c7 | SuccessfulCreate | Created pod: oauth-openshift-c84b6d8c7-9l5gh |
| | openshift-console | kubelet | console-6689f89885-m729m | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine |
| | openshift-console | multus | console-6689f89885-m729m | AddedInterface | Add eth0 [10.130.0.104/23] from ovn-kubernetes |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-795448867c to 1 from 2 |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8." to "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3" |
| | openshift-console | kubelet | console-6689f89885-m729m | Created | Created container console |
| | openshift-console | kubelet | console-6689f89885-m729m | Started | Started container console |
| | openshift-controller-manager | replicaset-controller | controller-manager-795448867c | SuccessfulDelete | Deleted pod: controller-manager-795448867c-2ht6p |
| | openshift-controller-manager | multus | controller-manager-69b9cd8b79-2w8mk | AddedInterface | Add eth0 [10.128.0.91/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-2ht6p | Killing | Stopping container controller-manager |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-69b9cd8b79 to 2 from 1 |
| | openshift-controller-manager | kubelet | controller-manager-69b9cd8b79-2w8mk | Started | Started container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-69b9cd8b79-2w8mk | Created | Created container controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-69b9cd8b79 | SuccessfulCreate | Created pod: controller-manager-69b9cd8b79-6kfgx |
| | openshift-controller-manager | kubelet | controller-manager-69b9cd8b79-2w8mk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" already present on machine |
| | openshift-controller-manager | kubelet | controller-manager-69b9cd8b79-6kfgx | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" already present on machine |
| | openshift-controller-manager | multus | controller-manager-69b9cd8b79-6kfgx | AddedInterface | Add eth0 [10.129.0.88/23] from ovn-kubernetes |
| | openshift-controller-manager | kubelet | controller-manager-69b9cd8b79-6kfgx | Started | Started container controller-manager |
| | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3" |
| | openshift-controller-manager | kubelet | controller-manager-69b9cd8b79-6kfgx | Created | Created container controller-manager |
| | openshift-controller-manager | kubelet | controller-manager-69b9cd8b79-6kfgx | ProbeError | Readiness probe error: Get "https://10.129.0.88:8443/healthz": dial tcp 10.129.0.88:8443: connect: connection refused body: |
| | openshift-controller-manager | kubelet | controller-manager-69b9cd8b79-6kfgx | Unhealthy | Readiness probe failed: Get "https://10.129.0.88:8443/healthz": dial tcp 10.129.0.88:8443: connect: connection refused |
| | openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-69b9cd8b79-6kfgx became leader |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-795448867c to 0 from 1 |
| | openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-69b9cd8b79 to 3 from 2 |
| | openshift-controller-manager | replicaset-controller | controller-manager-69b9cd8b79 | SuccessfulCreate | Created pod: controller-manager-69b9cd8b79-cwcp4 |
| | openshift-controller-manager | kubelet | controller-manager-795448867c-wt2cs | Killing | Stopping container controller-manager |
| | openshift-controller-manager | replicaset-controller | controller-manager-795448867c | SuccessfulDelete | Deleted pod: controller-manager-795448867c-wt2cs |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-jdfxk | |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-gbw9t | |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-cqvsc | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-gbw9t |
Started |
Started container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-cqvsc |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine | |
openshift-controller-manager |
kubelet |
controller-manager-69b9cd8b79-cwcp4 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:94a979335b237d8f57396159f885bf08076c53fe44830b7966b155b40aad6f77" already present on machine | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-jdfxk |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-cqvsc |
Created |
Created container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-gbw9t |
Created |
Created container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-cqvsc |
Started |
Started container kube-multus-additional-cni-plugins | |
openshift-controller-manager |
multus |
controller-manager-69b9cd8b79-cwcp4 |
AddedInterface |
Add eth0 [10.130.0.105/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-gbw9t |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e518fa7811c999ddcd42118e54e5c509cead66f831b08157bfe30f3330a86a95" already present on machine | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-jdfxk |
Created |
Created container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-jdfxk |
Started |
Started container kube-multus-additional-cni-plugins | |
openshift-controller-manager |
kubelet |
controller-manager-69b9cd8b79-cwcp4 |
Started |
Started container controller-manager | |
openshift-controller-manager |
kubelet |
controller-manager-69b9cd8b79-cwcp4 |
Created |
Created container controller-manager | |
openshift-console |
kubelet |
console-849dfdb48-nwk7r |
Killing |
Stopping container console | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-cqvsc |
Killing |
Stopping container kube-multus-additional-cni-plugins | |
openshift-console |
replicaset-controller |
console-849dfdb48 |
SuccessfulDelete |
Deleted pod: console-849dfdb48-nwk7r | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-gbw9t |
Killing |
Stopping container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-jdfxk |
Killing |
Stopping container kube-multus-additional-cni-plugins | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-849dfdb48 to 0 from 1 | |
openshift-console |
replicaset-controller |
console-6689f89885 |
SuccessfulDelete |
Deleted pod: console-6689f89885-9kjdn | |
openshift-console |
replicaset-controller |
console-bf6f6f7f6 |
SuccessfulCreate |
Created pod: console-bf6f6f7f6-lk9bc | |
| (x10) | openshift-console-operator |
console-operator-console-operator-consoleoperator |
console-operator |
DeploymentUpdated |
Updated Deployment.apps/console -n openshift-console because it changed |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-bf6f6f7f6 to 2 | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled down replica set console-6689f89885 to 1 from 2 | |
| (x3) | openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected") |
openshift-console |
replicaset-controller |
console-bf6f6f7f6 |
SuccessfulCreate |
Created pod: console-bf6f6f7f6-bl5sv | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: changes made during sync updates, additional sync expected" to "SyncLoopRefreshProgressing: working toward version 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, 1 replicas available" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "EncryptionMigrationControllerProgressing: migrating resources to a new write key: [core/configmaps core/secrets]\nNodeInstallerProgressing: 1 node is at revision 13; 2 nodes are at revision 14" to "EncryptionMigrationControllerProgressing: migrating resources to a new write key: [core/secrets]\nNodeInstallerProgressing: 1 node is at revision 13; 2 nodes are at revision 14" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 2, desired replicas is 3" to "Progressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6." | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-5486b44d46 to 1 from 0 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-67866594b6 to 2 from 3 | |
| (x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-5486b44d46 |
SuccessfulCreate |
Created pod: route-controller-manager-5486b44d46-6mvfq | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-67866594b6 |
SuccessfulDelete |
Deleted pod: route-controller-manager-67866594b6-m5fxg | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-67866594b6-m5fxg |
Killing |
Stopping container route-controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-5486b44d46-6mvfq |
Started |
Started container route-controller-manager | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-5486b44d46-6mvfq |
Created |
Created container route-controller-manager | |
openshift-route-controller-manager |
multus |
route-controller-manager-5486b44d46-6mvfq |
AddedInterface |
Add eth0 [10.128.0.92/23] from ovn-kubernetes | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-5486b44d46-6mvfq |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-67866594b6 to 1 from 2 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-67866594b6 |
SuccessfulDelete |
Deleted pod: route-controller-manager-67866594b6-2zw6c | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-5486b44d46 to 2 from 1 | |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-2zw6c | Killing | Stopping container route-controller-manager |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-5486b44d46 | SuccessfulCreate | Created pod: route-controller-manager-5486b44d46-22bql |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5486b44d46-22bql | Created | Created container route-controller-manager |
| | openshift-authentication | replicaset-controller | oauth-openshift-c84b6d8c7 | SuccessfulDelete | Deleted pod: oauth-openshift-c84b6d8c7-9l5gh |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-c84b6d8c7 to 0 from 1 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-74c78cc4c7 to 1 from 0 |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5486b44d46-22bql | Started | Started container route-controller-manager |
| | openshift-authentication | replicaset-controller | oauth-openshift-74c78cc4c7 | SuccessfulCreate | Created pod: oauth-openshift-74c78cc4c7-nk55v |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5486b44d46-22bql | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." |
| | openshift-route-controller-manager | multus | route-controller-manager-5486b44d46-22bql | AddedInterface | Add eth0 [10.130.0.106/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-67866594b6-phrd7 | Killing | Stopping container route-controller-manager |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-5486b44d46 to 3 from 2 |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-67866594b6 | SuccessfulDelete | Deleted pod: route-controller-manager-67866594b6-phrd7 |
| | openshift-authentication | multus | oauth-openshift-74c78cc4c7-nk55v | AddedInterface | Add eth0 [10.128.0.93/23] from ovn-kubernetes |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-67866594b6-phrd7_54cebae7-0b99-4f2e-a57b-111372315dde stopped leading |
| | openshift-route-controller-manager | replicaset-controller | route-controller-manager-5486b44d46 | SuccessfulCreate | Created pod: route-controller-manager-5486b44d46-rqg9k |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-nk55v | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" already present on machine |
| | openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-67866594b6 to 0 from 1 |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-nk55v | Created | Created container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-nk55v | Started | Started container oauth-openshift |
| | openshift-authentication | replicaset-controller | oauth-openshift-74c78cc4c7 | SuccessfulCreate | Created pod: oauth-openshift-74c78cc4c7-t6vbt |
| | openshift-marketplace | multus | redhat-operators-cwhdg | AddedInterface | Add eth0 [10.129.0.89/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-67d88f768b-wblgk | Killing | Stopping container oauth-openshift |
| | openshift-route-controller-manager | multus | route-controller-manager-5486b44d46-rqg9k | AddedInterface | Add eth0 [10.129.0.90/23] from ovn-kubernetes |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-67d88f768b to 1 from 2 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-74c78cc4c7 to 2 from 1 |
| | openshift-authentication | replicaset-controller | oauth-openshift-67d88f768b | SuccessfulDelete | Deleted pod: oauth-openshift-67d88f768b-wblgk |
| | openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-5486b44d46-rqg9k_12e8fc0c-ec33-4290-afe2-ab6f9006af2c became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 13 to 14 because static pod is ready |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5486b44d46-rqg9k | Unhealthy | Readiness probe failed: Get "https://10.129.0.90:8443/healthz": dial tcp 10.129.0.90:8443: connect: connection refused |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5486b44d46-rqg9k | ProbeError | Readiness probe error: Get "https://10.129.0.90:8443/healthz": dial tcp 10.129.0.90:8443: connect: connection refused body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "EncryptionMigrationControllerProgressing: migrating resources to a new write key: [core/secrets]\nNodeInstallerProgressing: 1 node is at revision 13; 2 nodes are at revision 14" to "EncryptionMigrationControllerProgressing: migrating resources to a new write key: [core/secrets]",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 13; 2 nodes are at revision 14" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 14" |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Created | Created container extract-utilities |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5486b44d46-rqg9k | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:684c31e18e49e61eabbde15593636c29ed8da8ea1b0adbae033bf498de5af5c2" already present on machine |
| | openshift-console | kubelet | console-bf6f6f7f6-lk9bc | Started | Started container console |
| | openshift-console | multus | console-bf6f6f7f6-lk9bc | AddedInterface | Add eth0 [10.129.0.91/23] from ovn-kubernetes |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5486b44d46-rqg9k | Created | Created container route-controller-manager |
| | openshift-console | kubelet | console-bf6f6f7f6-lk9bc | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine |
| | openshift-route-controller-manager | kubelet | route-controller-manager-5486b44d46-rqg9k | Started | Started container route-controller-manager |
| | openshift-console | kubelet | console-bf6f6f7f6-lk9bc | Created | Created container console |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Started | Started container extract-utilities |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-encryption-migration-controller-encryptionmigrationcontroller | kube-apiserver-operator | SecretUpdated | Updated Secret/encryption-key-openshift-kube-apiserver-1 -n openshift-config-managed because it changed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 14") |
| | openshift-console | kubelet | console-bf6f6f7f6-bl5sv | Started | Started container console |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-jdfxk | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-console | multus | console-bf6f6f7f6-bl5sv | AddedInterface | Add eth0 [10.128.0.94/23] from ovn-kubernetes |
| | openshift-console | kubelet | console-bf6f6f7f6-bl5sv | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:efa3fbc785b612ca8319098563824d6869f53296841d9514db4a9ca106fa361c" already present on machine |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-gbw9t | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| | openshift-console | kubelet | console-bf6f6f7f6-bl5sv | Created | Created container console |
| (x3) | openshift-multus | kubelet | cni-sysctl-allowlist-ds-cqvsc | Unhealthy | Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1 |
| (x3) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| | openshift-console | replicaset-controller | console-6689f89885 | SuccessfulDelete | Deleted pod: console-6689f89885-m729m |
| | openshift-console | kubelet | console-6689f89885-m729m | Killing | Stopping container console |
| | openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-6689f89885 to 0 from 1 |
| (x3) | openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: working toward version 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, 1 replicas available" to "SyncLoopRefreshProgressing: working toward version 4.18.0-0.ci.test-2024-10-24-124520-ci-op-2fcpj5j6-latest, 2 replicas available" |
| (x2) | openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-t6vbt | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" already present on machine |
| | openshift-authentication | multus | oauth-openshift-74c78cc4c7-t6vbt | AddedInterface | Add eth0 [10.130.0.107/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-t6vbt | Created | Created container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-t6vbt | Started | Started container oauth-openshift |
| | openshift-authentication | replicaset-controller | oauth-openshift-74c78cc4c7 | SuccessfulCreate | Created pod: oauth-openshift-74c78cc4c7-qwdfr |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Created | Created container extract-utilities |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-74c78cc4c7 to 3 from 2 |
| | openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-67d88f768b to 0 from 1 |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-authentication | kubelet | oauth-openshift-67d88f768b-dqtrl | Killing | Stopping container oauth-openshift |
| | openshift-marketplace | multus | certified-operators-wnhvq | AddedInterface | Add eth0 [10.129.0.92/23] from ovn-kubernetes |
| | openshift-authentication | replicaset-controller | oauth-openshift-67d88f768b | SuccessfulDelete | Deleted pod: oauth-openshift-67d88f768b-dqtrl |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Started | Started container extract-utilities |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{ + string("https://first.foo.bar"), string("https://kubernetes.default.svc"), }, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, ... // 3 identical entries "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, "send-retry-after-while-not-ready-once": []any{string("false")}, "service-account-issuer": []any{ + string("https://first.foo.bar"), string("https://kubernetes.default.svc"), }, - "service-account-jwks-uri": []any{ - string("https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/openid/v1/jwks"), - }, }, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, ... // 3 identical entries } |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from https://kubernetes.default.svc to https://first.foo.bar |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n-\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\"api-audiences\": []any{string(\"https://first.foo.bar\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n\u00a0\u00a0\t\t\"encryption-provider-config\": []any{string(\"/var/run/secrets/encryption-config/encryption-config\")},\n\u00a0\u00a0\t\t... // 3 identical entries\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveServiceAccountIssuer | ServiceAccount issuer changed from https://kubernetes.default.svc to https://first.foo.bar |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Desired ServiceAccountIssuer "https://first.foo.bar" is now active issuer. Previous issuer "https://kubernetes.default.svc" is trusted until 2024-10-25 14:09:08.718285479 +0000 UTC m=+89462.569905324 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 8, desired generation is 9.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | Killing | Stopping container oauth-apiserver |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-58d4d69dd7 | SuccessfulCreate | Created pod: apiserver-58d4d69dd7-f2th2 |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6d668d4fc7 | SuccessfulDelete | Deleted pod: apiserver-6d668d4fc7-d9msv |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-58d4d69dd7 to 1 from 0 |
| | openshift-authentication | multus | oauth-openshift-74c78cc4c7-qwdfr | AddedInterface | Add eth0 [10.129.0.93/23] from ovn-kubernetes |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-qwdfr | Created | Created container oauth-openshift |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 8, desired generation is 9.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-qwdfr | Started | Started container oauth-openshift |
| | openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-qwdfr | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 15 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{ + string("https://second.foo.bar"), string("https://first.foo.bar"), string("https://kubernetes.default.svc"), }, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, ... // 3 identical entries "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, "send-retry-after-while-not-ready-once": []any{string("false")}, "service-account-issuer": []any{ + string("https://second.foo.bar"), string("https://first.foo.bar"), string("https://kubernetes.default.svc"), }, }, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:\|$)`), string("//localhost(:\|$)")}, ... // 3 identical entries } |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAPIAudiences | service account issuer changed from https://first.foo.bar to https://second.foo.bar |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Desired ServiceAccountIssuer "https://second.foo.bar" is now active issuer. Previous issuer "https://first.foo.bar" is trusted until 2024-10-25 14:09:23.900225894 +0000 UTC m=+89477.751845779 |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n-\u00a0\t\t\"api-audiences\": []any{string(\"https://first.foo.bar\")},\n+\u00a0\t\t\"api-audiences\": []any{string(\"https://second.foo.bar\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:\|$)`), string(\"//localhost(:\|$)\")},\n\u00a0\u00a0\t\t\"encryption-provider-config\": []any{string(\"/var/run/secrets/encryption-config/encryption-config\")},\n\u00a0\u00a0\t\t... // 3 identical entries\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveServiceAccountIssuer | ServiceAccount issuer changed from https://first.foo.bar to https://second.foo.bar |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | BackOff | Back-off pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Failed | Error: ErrImagePull |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Failed | Failed to pull image "registry.redhat.io/redhat/redhat-operator-index:v4.18": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.redhat.io/redhat/redhat-operator-index:v4.18: pinging container registry registry.redhat.io: Get "https://registry.redhat.io/v2/": dial tcp 23.65.20.153:443: i/o timeout |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Failed | Error: ImagePullBackOff |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-15 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-15 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-58d4d69dd7 to 0 from 1 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-15 -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-586d87f8b7 | SuccessfulCreate | Created pod: apiserver-586d87f8b7-xg64h |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-586d87f8b7 to 1 from 0 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 9, desired generation is 10." |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-58d4d69dd7 | SuccessfulDelete | Deleted pod: apiserver-58d4d69dd7-f2th2 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-15 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-15 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 9, desired generation is 10." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-15 -n openshift-kube-apiserver because it was missing |
| (x2) | openshift-marketplace | kubelet | redhat-operators-cwhdg | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Started | Started container extract-content |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-15 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.662s (1.662s including waiting). Image size: 1411450299 bytes. |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{ "api-audiences": []any{ - string("https://second.foo.bar"), - string("https://first.foo.bar"), string("https://kubernetes.default.svc"), }, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, ... // 3 identical entries "runtime-config": []any{string("admissionregistration.k8s.io/v1beta1=true")}, "send-retry-after-while-not-ready-once": []any{string("false")}, "service-account-issuer": []any{ - string("https://second.foo.bar"), - string("https://first.foo.bar"), string("https://kubernetes.default.svc"), }, + "service-account-jwks-uri": []any{ + string("https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443/openid/v1/jwks"), + }, }, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, ... // 3 identical entries } | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIAudiences |
service account issuer changed from https://second.foo.bar to https://kubernetes.default.svc | |
openshift-marketplace |
kubelet |
redhat-operators-cwhdg |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-serviceaccountissuercontroller |
kube-apiserver-operator |
ServiceAccountIssuer |
Issuer set to default value "https://kubernetes.default.svc" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n-\u00a0\t\t\"api-audiences\": []any{string(\"https://second.foo.bar\")},\n+\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"encryption-provider-config\": []any{string(\"/var/run/secrets/encryption-config/encryption-config\")},\n\u00a0\u00a0\t\t... // 3 identical entries\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" | |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveServiceAccountIssuer | ServiceAccount issuer changed from https://second.foo.bar to https://kubernetes.default.svc |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 925ms (925ms including waiting). Image size: 896974229 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-15 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Started | Started container registry-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-15 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-15 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Failed | Failed to pull image "registry.redhat.io/redhat/certified-operator-index:v4.18": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: pinging container registry registry.redhat.io: Get "https://registry.redhat.io/v2/": dial tcp 23.65.20.153:443: i/o timeout |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-15 -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | kubelet | redhat-operators-cwhdg | Killing | Stopping container registry-server |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-6d668d4fc7 | SuccessfulCreate | Created pod: apiserver-6d668d4fc7-pn5rk |
| | openshift-oauth-apiserver | replicaset-controller | apiserver-586d87f8b7 | SuccessfulDelete | Deleted pod: apiserver-586d87f8b7-xg64h |
| | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-586d87f8b7 to 0 from 1 |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 10, desired generation is 11." |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/encryption-config-15 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-15 -n openshift-kube-apiserver because it was missing |
| (x10) | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x10) | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-15 -n openshift-kube-apiserver because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6d668d4fc7-d9msv pod)",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-15 -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-pn5rk | Created | Created container fix-audit-permissions |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-pn5rk | Started | Started container fix-audit-permissions |
| | openshift-oauth-apiserver | multus | apiserver-6d668d4fc7-pn5rk | AddedInterface | Add eth0 [10.128.0.95/23] from ovn-kubernetes |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-pn5rk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-pn5rk | Created | Created container oauth-apiserver |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-pn5rk | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-pn5rk | Started | Started container oauth-apiserver |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 16 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 15 triggered by "required configmap/config has changed" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6d668d4fc7-d9msv pod)" to "All is well" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-16 -n openshift-kube-apiserver because it was missing |
| | openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-d9msv | ProbeError | Readiness probe error: Get "https://10.128.0.84:8443/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.94/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.108/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-15-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.96/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 14; 0 nodes have achieved new revision 15"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 14" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 14; 0 nodes have achieved new revision 15" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/encryption-config-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-16 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 16 triggered by "required configmap/config has changed" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 14 to 15 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 14 is the oldest |
| (x2) | openshift-marketplace | kubelet | certified-operators-wnhvq | Failed | Error: ErrImagePull |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Failed | Failed to pull image "registry.redhat.io/redhat/certified-operator-index:v4.18": rpc error: code = DeadlineExceeded desc = initializing source docker://registry.redhat.io/redhat/certified-operator-index:v4.18: pinging container registry registry.redhat.io: Get "https://registry.redhat.io/v2/": dial tcp 23.12.19.13:443: i/o timeout |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.95/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 14; 0 nodes have achieved new revision 15" to "NodeInstallerProgressing: 3 nodes are at revision 14; 0 nodes have achieved new revision 16",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 14; 0 nodes have achieved new revision 15" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 14; 0 nodes have achieved new revision 16" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | multus | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.109/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| (x2) | openshift-marketplace | kubelet | certified-operators-wnhvq | BackOff | Back-off pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| (x2) | openshift-marketplace | kubelet | certified-operators-wnhvq | Failed | Error: ImagePullBackOff |
| | openshift-kube-apiserver | multus | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.97/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| (x3) | openshift-marketplace | kubelet | certified-operators-wnhvq | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | multus | installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.96/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Created | Created container extract-content |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 16.657s (16.657s including waiting). Image size: 955380483 bytes. |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 997ms (997ms including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Created | Created container registry-server |
| | openshift-marketplace | multus | redhat-marketplace-dfq7f | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.767s (1.767s including waiting). Image size: 967040755 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.023s (1.023s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-wnhvq | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-dfq7f | Killing | Stopping container registry-server |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | static-pod-installer | installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 16 |
| (x16) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 14 to 16 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 14; 0 nodes have achieved new revision 16" to "NodeInstallerProgressing: 2 nodes are at revision 14; 1 node is at revision 16",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 14; 0 nodes have achieved new revision 16" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 14; 1 node is at revision 16" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 14 to 16 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 with revision 14 is the oldest |
| | openshift-kube-apiserver | kubelet | installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.110/23] from ovn-kubernetes |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container installer | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Started |
Started container extract-utilities | |
openshift-marketplace |
multus |
community-operators-9z5nm |
AddedInterface |
Add eth0 [10.128.0.99/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Pulling |
Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.978s (1.978s including waiting). Image size: 1110454519 bytes. | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.077s (1.077s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
community-operators-9z5nm |
Killing |
Stopping container registry-server | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-28829655 | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-28829655-d7dgm |
AddedInterface |
Add eth0 [10.131.0.21/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829655-d7dgm |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829655-d7dgm |
Created |
Created container collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829655-d7dgm |
Started |
Started container collect-profiles | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-28829655 |
SuccessfulCreate |
Created pod: collect-profiles-28829655-d7dgm | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-28829655, condition: Complete | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-28829655 |
Completed |
Job completed | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulDelete |
Deleted job collect-profiles-28829610 | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
static-pod-installer |
installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
StaticPodInstallerCompleted |
Successfully installed revision 16 | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-2_83ec63d2-139a-48d2-86a7-af4240c5e120 became leader | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
ProbeError |
Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check 
failed | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
KubeAPIReadyz |
readyz=true | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Unhealthy |
Startup probe failed: HTTP probe failed with statuscode: 500 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 14 to 16 because static pod is ready | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 14; 1 node is at revision 16" to "NodeInstallerProgressing: 1 node is at revision 14; 2 nodes are at revision 16",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 14; 1 node is at revision 16" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 14; 2 nodes are at revision 16" | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 14 to 16 because node ci-op-2fcpj5j6-f6035-2lklf-master-2 with revision 14 is the oldest | |
openshift-kube-apiserver |
multus |
installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.100/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
static-pod-installer |
installer-16-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
StaticPodInstallerCompleted |
Successfully installed revision 16 | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
| (x16) | openshift-kube-apiserver |
kubelet |
kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok 
[+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
ci-op-2fcpj5j6-f6035-2lklf-master-1_15ac148f-aa4f-4edd-b69b-3f487b933129 became leader | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 1m10s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
KubeAPIReadyz |
readyz=true | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 16"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 14; 2 nodes are at revision 16" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 16" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeCurrentRevisionChanged |
Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 14 to 16 because static pod is ready | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Created |
Created container extract-utilities | |
openshift-marketplace |
multus |
redhat-operators-drwtw |
AddedInterface |
Add eth0 [10.129.0.97/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.51s (1.51s including waiting). Image size: 1411450299 bytes. | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.141s (1.141s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Started |
Started container registry-server | |
openshift-marketplace |
kubelet |
redhat-operators-drwtw |
Killing |
Stopping container registry-server | |
openshift-machine-config-operator |
machine-config-operator |
machine-config-operator |
ConfigMapUpdated |
Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml | |
openshift-marketplace |
multus |
certified-operators-fbht5 |
AddedInterface |
Add eth0 [10.129.0.98/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Pulling |
Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 2.53s (2.53s including waiting). Image size: 955380483 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Created |
Created container extract-content | |
openshift-marketplace |
multus |
redhat-marketplace-hhxmm |
AddedInterface |
Add eth0 [10.128.0.101/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.334s (1.334s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
certified-operators-fbht5 |
Started |
Started container registry-server | |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 1.456s (1.456s including waiting). Image size: 967040755 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 921ms (921ms including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Created | Created container registry-server |
| | openshift-marketplace | kubelet | certified-operators-fbht5 | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-hhxmm | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | community-operators-m42qg | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | community-operators-m42qg | Created | Created container extract-utilities |
| | openshift-marketplace | multus | community-operators-m42qg | AddedInterface | Add eth0 [10.128.0.102/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | community-operators-m42qg | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-m42qg | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | community-operators-m42qg | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 1.112s (1.112s including waiting). Image size: 1110454519 bytes. |
| | openshift-marketplace | kubelet | community-operators-m42qg | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-m42qg | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-m42qg | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | community-operators-m42qg | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.618s (1.618s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | community-operators-m42qg | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-m42qg | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-m42qg | Killing | Stopping container registry-server |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-000 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/user-serving-cert-000 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-000 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-000 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/user-serving-cert-002 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-002 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-002 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-002 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-001 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-001 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/user-serving-cert-001 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateUpdated | Wrote updated secret: openshift-kube-apiserver/user-serving-cert-001 |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ ... // 4 identical entries "imagePolicyConfig": map[string]any{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{ ... // 2 identical entries "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), "namedCertificates": []any{ ... // 3 identical elements map[string]any{"certFile": string("/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-s"...), "keyFile": string("/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-s"...)}, map[string]any{"certFile": string("/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-"...), "keyFile": string("/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-"...)}, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-000/tls.crt"), + "keyFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-000/tls.key"), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-001/tls.crt"), + "keyFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-001/tls.key"), + }, + map[string]any{ + "certFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-002/tls.crt"), + "keyFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-002/tls.key"), + }, }, }, } |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 17 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/encryption-config-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-17 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 17 triggered by "required configmap/config has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.99/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.111/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | PodCreated | Created Pod/revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container pruner |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.103/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | revision-pruner-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container pruner |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 16; 0 nodes have achieved new revision 17"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 16" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 16; 0 nodes have achieved new revision 17" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 16 to 17 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 16 is the oldest |
| | openshift-kube-apiserver | multus | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AddedInterface | Add eth0 [10.129.0.100/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | static-pod-installer | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-0 | StaticPodInstallerCompleted | Successfully installed revision 17 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| (x141) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x32) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-0 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | KubeAPIReadyz | readyz=true |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829670-8jp8p | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-operator-lifecycle-manager | multus | collect-profiles-28829670-8jp8p | AddedInterface | Add eth0 [10.131.0.22/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829670-8jp8p | Created | Created container collect-profiles |
| | openshift-operator-lifecycle-manager | kubelet | collect-profiles-28829670-8jp8p | Started | Started container collect-profiles |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829670 | SuccessfulCreate | Created pod: collect-profiles-28829670-8jp8p |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulCreate | Created job collect-profiles-28829670 |
| | openshift-operator-lifecycle-manager | job-controller | collect-profiles-28829670 | Completed | Job completed |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SuccessfulDelete | Deleted job collect-profiles-28829625 |
| | openshift-operator-lifecycle-manager | cronjob-controller | collect-profiles | SawCompletedJob | Saw completed job: collect-profiles-28829670, condition: Complete |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 16 to 17 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 16; 0 nodes have achieved new revision 17" to "NodeInstallerProgressing: 2 nodes are at revision 16; 1 node is at revision 17",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 16; 0 nodes have achieved new revision 17" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 16; 1 node is at revision 17" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 16 to 17 because node ci-op-2fcpj5j6-f6035-2lklf-master-1 with revision 16 is the oldest |
| | openshift-kube-apiserver | multus | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AddedInterface | Add eth0 [10.130.0.112/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | static-pod-installer | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-1 | StaticPodInstallerCompleted | Successfully installed revision 17 |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| (x47) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x2) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | multus | redhat-operators-pppfb | AddedInterface | Add eth0 [10.129.0.101/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.425s (1.425s including waiting). Image size: 1411450299 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.526s (1.526s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Created | Created container registry-server |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-2_f644c1a0-1789-4e3c-a209-1129a49fd157 became leader |
| | openshift-marketplace | kubelet | redhat-operators-pppfb | Killing | Stopping container registry-server |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | TerminationGracefulTerminationFinished | All pending requests processed |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-1 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]shutdown ok readyz check failed |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-marketplace | multus | redhat-marketplace-h5n44 | AddedInterface | Add eth0 [10.128.0.104/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 8.121s (8.121s including waiting). Image size: 967040755 bytes. |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Started | Started container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Created | Created container extract-content |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Created | Created container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 992ms (992ms including waiting). Image size: 896974229 bytes. |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 16; 1 node is at revision 17" to "NodeInstallerProgressing: 1 node is at revision 16; 2 nodes are at revision 17",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 16; 1 node is at revision 17" to "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 16; 2 nodes are at revision 17" |
| (x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-1" from revision 16 to 17 because static pod is ready |
| | openshift-marketplace | kubelet | redhat-marketplace-h5n44 | Killing | Stopping container registry-server |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeTargetRevisionChanged | Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 16 to 17 because node ci-op-2fcpj5j6-f6035-2lklf-master-2 with revision 16 is the oldest |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | PodCreated | Created Pod/installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | multus | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AddedInterface | Add eth0 [10.128.0.105/23] from ovn-kubernetes |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container installer |
| | openshift-kube-apiserver | kubelet | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container installer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ShutdownInitiated | Received signal to terminate, becoming unready, but keeping serving |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | TerminationPreShutdownHooksFinished | All pre-shutdown hooks have been finished |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | static-pod-installer | installer-17-ci-op-2fcpj5j6-f6035-2lklf-master-2 | StaticPodInstallerCompleted | Successfully installed revision 17 |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Killing | Stopping container kube-apiserver-check-endpoints |
| (x47) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| (x17) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-api-request-count-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/start-service-ip-repair-controllers ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-status-local-available-controller ok [+]poststarthook/apiservice-status-remote-available-controller ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed |
| | openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | ci-op-2fcpj5j6-f6035-2lklf-master-1_c23d7253-a9f8-4278-bc42-f255a673ac1c became leader |
| | openshift-marketplace | multus | certified-operators-gh6hl | AddedInterface | Add eth0 [10.129.0.102/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Started | Started container extract-content |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Created | Created container extract-content |
| | openshift-marketplace | multus | community-operators-24v2d | AddedInterface | Add eth0 [10.128.0.106/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.18" in 1.528s (1.528s including waiting). Image size: 955380483 bytes. |
| | openshift-marketplace | kubelet | community-operators-24v2d | Started | Started container extract-utilities |
| | openshift-marketplace | kubelet | community-operators-24v2d | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.18" |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | community-operators-24v2d | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine |
| | openshift-marketplace | kubelet | community-operators-24v2d | Created | Created container extract-utilities |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Created | Created container registry-server |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Started | Started container registry-server |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.498s (1.498s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | community-operators-24v2d | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.18" in 2.594s (2.594s including waiting). Image size: 1110454519 bytes. |
| | openshift-marketplace | kubelet | community-operators-24v2d | Created | Created container extract-content |
| | openshift-marketplace | kubelet | community-operators-24v2d | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" |
| | openshift-marketplace | kubelet | community-operators-24v2d | Started | Started container extract-content |
| | openshift-marketplace | kubelet | community-operators-24v2d | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-24v2d | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-24v2d | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.248s (1.248s including waiting). Image size: 896974229 bytes. |
| | openshift-marketplace | kubelet | certified-operators-gh6hl | Killing | Stopping container registry-server |
| | openshift-marketplace | kubelet | community-operators-24v2d | Killing | Stopping container registry-server |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | AfterShutdownDelayDuration | The minimal shutdown duration of 1m10s finished |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | InFlightRequestsDrained | All non long-running request(s) in-flight have drained |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | HTTPServerStoppedListening | HTTP Server has stopped listening |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | TerminationGracefulTerminationFinished | All pending requests processed |
| (x8) | openshift-kube-apiserver | kubelet | kube-apiserver-guard-ci-op-2fcpj5j6-f6035-2lklf-master-2 | ProbeError | Readiness probe error: Get "https://10.0.0.6:6443/readyz": dial tcp 10.0.0.6:6443: connect: connection refused body: |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-apiserver |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container setup |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:ec5ef6dda3c256bf641cce01c338f9e7ef0e2bf8821e90809d4e18fdc6759989" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-apiserver-check-endpoints |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-apiserver-insecure-readyz |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-apiserver-cert-syncer |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Created | Created container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | kubelet | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | Started | Started container kube-apiserver-cert-regeneration-controller |
| | openshift-kube-apiserver | apiserver | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | KubeAPIReadyz | readyz=true |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "ci-op-2fcpj5j6-f6035-2lklf-master-2" from revision 16 to 17 because static pod is ready |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 17"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 node is at revision 16; 2 nodes are at revision 17" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 17" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ConfigMapUpdated | Updated ConfigMap/kube-rbac-proxy -n openshift-machine-config-operator: cause by changes in data.config-file.yaml |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-002 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-002 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-002 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-000 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-000 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-000 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-001 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-001 |
| | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateRemoved | Removed file for missing secret: openshift-kube-apiserver/user-serving-cert-001 |
| (x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]any{ ... // 4 identical entries "imagePolicyConfig": map[string]any{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{ ... // 2 identical entries "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), "namedCertificates": []any{ ... // 3 identical elements map[string]any{"certFile": string("/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-s"...), "keyFile": string("/etc/kubernetes/static-pod-certs/secrets/internal-loadbalancer-s"...)}, map[string]any{"certFile": string("/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-"...), "keyFile": string("/etc/kubernetes/static-pod-resources/secrets/localhost-recovery-"...)}, - map[string]any{ - "certFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-000/tls.crt"), - "keyFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-000/tls.key"), - }, - map[string]any{ - "certFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-001/tls.crt"), - "keyFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-001/tls.key"), - }, - map[string]any{ - "certFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-002/tls.crt"), - "keyFile": string("/etc/kubernetes/static-pod-certs/secrets/user-serving-cert-002/tls.key"), - }, }, }, } |
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/user-client-ca -n openshift-kube-apiserver because it was missing
(x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{\n\u00a0\u00a0\t\tstring(`//127\\.0\\.0\\.1(:|$)`),\n\u00a0\u00a0\t\tstring(\"//localhost(:|$)\"),\n+\u00a0\t\tstring(\"//valid.domain.com(:|$)\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.gcp-\"...), \"loginURL\": string(\"https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t... // 2 identical entries\n\u00a0\u00a0}\n" |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{\n\u00a0\u00a0\t\t\tstring(`//127\\.0\\.0\\.1(:|$)`),\n\u00a0\u00a0\t\t\tstring(\"//localhost(:|$)\"),\n+\u00a0\t\t\tstring(\"//valid.domain.com(:|$)\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"encryption-provider-config\": []any{string(\"/var/run/secrets/encryption-config/encryption-config\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{string(\"https://10.0.0.3:2379\"), string(\"https://10.0.0.4:2379\"), string(\"https://10.0.0.6:2379\")},\n\u00a0\u00a0\t\t... // 2 identical entries\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//localhost(:|$)" "//valid.domain.com(:|$)"]
(x2) | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//localhost(:|$)" "//valid.domain.com(:|$)"]
openshift-marketplace | kubelet | redhat-operators-vn4tl | Started | Started container extract-utilities
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//localhost(:|$)" "//valid.domain.com(:|$)"]
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "encryption-provider-config": []any{string("/etc/kubernetes/static-pod-resources/secrets/encryption-config/e"...)}, ...}, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{ string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)"), + string("//valid.domain.com(:|$)"), }, "imagePolicyConfig": map[string]any{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
openshift-marketplace | kubelet | redhat-operators-vn4tl | Created | Created container extract-utilities
openshift-marketplace | kubelet | redhat-operators-vn4tl | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine
openshift-marketplace | multus | redhat-operators-vn4tl | AddedInterface | Add eth0 [10.129.0.103/23] from ovn-kubernetes
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//localhost(:|$)"]
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//domain.foreign.it(:|$)" "//localhost(:|$)" "//something.*.now(:|$)"]
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{\n\u00a0\u00a0\t\tstring(`//127\\.0\\.0\\.1(:|$)`),\n-\u00a0\t\tstring(\"//domain.foreign.it(:|$)\"),\n\u00a0\u00a0\t\tstring(\"//localhost(:|$)\"),\n-\u00a0\t\tstring(\"//something.*.now(:|$)\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.gcp-\"...), \"loginURL\": string(\"https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t... // 2 identical entries\n\u00a0\u00a0}\n" | |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "encryption-provider-config": []any{string("/etc/kubernetes/static-pod-resources/secrets/encryption-config/e"...)}, ...}, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{ string(`//127\.0\.0\.1(:|$)`), - string("//domain.foreign.it(:|$)"), string("//localhost(:|$)"), - string("//something.*.now(:|$)"), }, "imagePolicyConfig": map[string]any{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } |
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged |
Writing updated observed config: map[string]any{ "admission": map[string]any{"pluginConfig": map[string]any{"PodSecurity": map[string]any{"configuration": map[string]any{"defaults": map[string]any{"audit": string("restricted"), "audit-version": string("latest"), "enforce": string("restricted"), "enforce-version": string("latest"), ...}}}, "network.openshift.io/ExternalIPRanger": map[string]any{"configuration": map[string]any{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]any{"configuration": map[string]any{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []any{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]any{"api-audiences": []any{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []any{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []any{string("v1")}, "encryption-provider-config": []any{string("/etc/kubernetes/static-pod-resources/secrets/encryption-config/e"...)}, ...}, "authConfig": map[string]any{"oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/o"...)}, "corsAllowedOrigins": []any{ string(`//127\.0\.0\.1(:|$)`), + string("//domain.foreign.it(:|$)"), string("//localhost(:|$)"), - string("//valid.domain.com(:|$)"), + string("//something.*.now(:|$)"), }, "imagePolicyConfig": map[string]any{"internalRegistryHostname": string("image-registry.openshift-image-registry.svc:5000")}, "servicesSubnet": string("172.30.0.0/16"), "servingInfo": map[string]any{"bindAddress": string("0.0.0.0:6443"), "bindNetwork": string("tcp4"), "cipherSuites": []any{string("TLS_AES_128_GCM_SHA256"), string("TLS_AES_256_GCM_SHA384"), string("TLS_CHACHA20_POLY1305_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), ...}, "minTLSVersion": string("VersionTLS12"), ...}, } | |
openshift-marketplace | kubelet | redhat-operators-vn4tl | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.18"
openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//domain.foreign.it(:|$)" "//localhost(:|$)" "//something.*.now(:|$)"]
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//domain.foreign.it(:|$)" "//localhost(:|$)" "//something.*.now(:|$)"]
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{\n\u00a0\u00a0\t\t\tstring(`//127\\.0\\.0\\.1(:|$)`),\n+\u00a0\t\t\tstring(\"//domain.foreign.it(:|$)\"),\n\u00a0\u00a0\t\t\tstring(\"//localhost(:|$)\"),\n-\u00a0\t\t\tstring(\"//valid.domain.com(:|$)\"),\n+\u00a0\t\t\tstring(\"//something.*.now(:|$)\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"encryption-provider-config\": []any{string(\"/var/run/secrets/encryption-config/encryption-config\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{string(\"https://10.0.0.3:2379\"), string(\"https://10.0.0.4:2379\"), string(\"https://10.0.0.6:2379\")},\n\u00a0\u00a0\t\t... // 2 identical entries\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{\n\u00a0\u00a0\t\tstring(`//127\\.0\\.0\\.1(:|$)`),\n+\u00a0\t\tstring(\"//domain.foreign.it(:|$)\"),\n\u00a0\u00a0\t\tstring(\"//localhost(:|$)\"),\n-\u00a0\t\tstring(\"//valid.domain.com(:|$)\"),\n+\u00a0\t\tstring(\"//something.*.now(:|$)\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"https://console-openshift-console.apps.ci-op-2fcpj5j6-f6035.gcp-\"...), \"loginURL\": string(\"https://api.ci-op-2fcpj5j6-f6035.XXXXXXXXXXXXXXXXXXXXXX:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t... // 2 identical entries\n\u00a0\u00a0}\n" | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{\n\u00a0\u00a0\t\t\tstring(`//127\\.0\\.0\\.1(:|$)`),\n-\u00a0\t\t\tstring(\"//domain.foreign.it(:|$)\"),\n\u00a0\u00a0\t\t\tstring(\"//localhost(:|$)\"),\n-\u00a0\t\t\tstring(\"//something.*.now(:|$)\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"encryption-provider-config\": []any{string(\"/var/run/secrets/encryption-config/encryption-config\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{string(\"https://10.0.0.3:2379\"), string(\"https://10.0.0.4:2379\"), string(\"https://10.0.0.6:2379\")},\n\u00a0\u00a0\t\t... // 2 identical entries\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" | |
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//localhost(:|$)"]
openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveAdditionalCORSAllowedOrigins | corsAllowedOrigins changed to ["//127\\.0\\.0\\.1(:|$)" "//localhost(:|$)"]
openshift-marketplace | kubelet | redhat-operators-vn4tl | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.18" in 1.825s (1.825s including waiting). Image size: 1411450299 bytes.
openshift-marketplace | kubelet | redhat-operators-vn4tl | Created | Created container extract-content
openshift-marketplace | kubelet | redhat-operators-vn4tl | Started | Started container extract-content
openshift-marketplace | kubelet | redhat-operators-vn4tl | Pulling | Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8"
(x2) | openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | ConfigMapUpdated | Updated ConfigMap/metrics-client-ca -n openshift-monitoring: cause by changes in data.client-ca.crt
(x2) | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateUpdated | Wrote updated configmap: openshift-kube-apiserver/client-ca
openshift-marketplace | kubelet | redhat-operators-vn4tl | Created | Created container registry-server
(x2) | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateUpdated | Wrote updated configmap: openshift-kube-apiserver/client-ca
(x2) | openshift-kube-apiserver | cert-regeneration-controller-manage-client-ca-bundle-recovery-controller | ci-op-2fcpj5j6-f6035-2lklf-master-0 | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-apiserver: cause by changes in data.ca-bundle.crt
(x2) | openshift-kube-apiserver | cert-syncer-cert-sync-controller | kube-apiserver-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateUpdated | Wrote updated configmap: openshift-kube-apiserver/client-ca
openshift-marketplace | kubelet | redhat-operators-vn4tl | Started | Started container registry-server
openshift-marketplace | kubelet | redhat-operators-vn4tl | Pulled | Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.233s (1.233s including waiting). Image size: 896974229 bytes.
openshift-oauth-apiserver | replicaset-controller | apiserver-6d668d4fc7 | SuccessfulDelete | Deleted pod: apiserver-6d668d4fc7-pn5rk
openshift-oauth-apiserver | kubelet | apiserver-6d668d4fc7-pn5rk | Killing | Stopping container oauth-apiserver
(x9) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-kube-apiserver: cause by changes in data.config.yaml
openshift-oauth-apiserver | replicaset-controller | apiserver-7b485d54c8 | SuccessfulCreate | Created pod: apiserver-7b485d54c8-xf6hd
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | StartingNewRevision | new revision 18 triggered by "required configmap/config has changed"
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 11, desired generation is 12.")
(x2) | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-6d668d4fc7 to 2 from 3
openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-7b485d54c8 to 1 from 0
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: cause by changes in data.ca-bundle.crt
(x2) | openshift-kube-controller-manager | cert-syncer-cert-sync-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 | CertificateUpdated | Wrote updated configmap: openshift-kube-controller-manager/client-ca
(x2) | openshift-kube-controller-manager | cert-syncer-cert-sync-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-0 | CertificateUpdated | Wrote updated configmap: openshift-kube-controller-manager/client-ca
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/client-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt
(x2) | openshift-kube-controller-manager | cert-syncer-cert-sync-controller | kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-1 | CertificateUpdated | Wrote updated configmap: openshift-kube-controller-manager/client-ca
openshift-monitoring | replicaset-controller | metrics-server-5ffb7997c | SuccessfulCreate | Created pod: metrics-server-5ffb7997c-2fmcw
openshift-monitoring | replicaset-controller | metrics-server-5ffb7997c | SuccessfulCreate | Created pod: metrics-server-5ffb7997c-krp7q
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled down replica set metrics-server-7f98b5f8b5 to 1 from 2
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-5ffb7997c to 2 from 1
openshift-monitoring | replicaset-controller | metrics-server-7f98b5f8b5 | SuccessfulDelete | Deleted pod: metrics-server-7f98b5f8b5-9v6xq
openshift-marketplace | kubelet | redhat-operators-vn4tl | Killing | Stopping container registry-server
openshift-monitoring | cluster-monitoring-operator | cluster-monitoring-operator | SecretCreated | Created Secret/metrics-server-3eo9et645ffii -n openshift-monitoring because it was missing
openshift-monitoring | deployment-controller | metrics-server | ScalingReplicaSet | Scaled up replica set metrics-server-5ffb7997c to 1
openshift-monitoring | kubelet | metrics-server-7f98b5f8b5-9v6xq | Killing | Stopping container metrics-server
openshift-monitoring | kubelet | metrics-server-5ffb7997c-2fmcw | Started | Started container metrics-server
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-18 -n openshift-kube-apiserver because it was missing
openshift-monitoring | kubelet | metrics-server-5ffb7997c-2fmcw | Pulled | Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:c5a45e83f3e8be25f8348cedcbff551e71e435742a84161872c7e020e49bcf31" already present on machine
openshift-monitoring | multus | metrics-server-5ffb7997c-2fmcw | AddedInterface | Add eth0 [10.129.2.16/23] from ovn-kubernetes
openshift-monitoring | kubelet | metrics-server-5ffb7997c-2fmcw | Created | Created container metrics-server
openshift-oauth-apiserver | replicaset-controller | apiserver-6d668d4fc7 | SuccessfulCreate | Created pod: apiserver-6d668d4fc7-hfkd9
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-74c78cc4c7 to 2 from 3
openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-7b485d54c8 to 0 from 1
openshift-authentication | kubelet | oauth-openshift-74c78cc4c7-qwdfr | Killing | Stopping container oauth-openshift
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled up replica set oauth-openshift-9958d4595 to 1 from 0
(x14) | openshift-authentication-operator | oauth-apiserver-oauthapiservercontroller-workloadworkloadcontroller | authentication-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed
openshift-authentication | replicaset-controller | oauth-openshift-9958d4595 | SuccessfulCreate | Created pod: oauth-openshift-9958d4595-s94mv
openshift-oauth-apiserver | replicaset-controller | apiserver-7b485d54c8 | SuccessfulDelete | Deleted pod: apiserver-7b485d54c8-xf6hd
openshift-authentication | replicaset-controller | oauth-openshift-74c78cc4c7 | SuccessfulDelete | Deleted pod: oauth-openshift-74c78cc4c7-qwdfr
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-18 -n openshift-kube-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 11, desired generation is 12." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 11, desired generation is 12.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6."
(x2) | openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6d668d4fc7 to 3 from 2
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 11, desired generation is 12.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 12, desired generation is 13.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6."
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-18 -n openshift-kube-apiserver because it was missing
(x3) | openshift-authentication-operator | cluster-authentication-operator-payload-config-controller-openshift-authentication-payloadconfig | authentication-operator | ConfigMapUpdated | Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: cause by changes in data.v4-0-config-system-cliconfig
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/oauth-metadata-18 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-18 -n openshift-kube-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 12, desired generation is 13.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 12, desired generation is 13.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation"
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-18 -n openshift-kube-apiserver because it was missing
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6d668d4fc7-pn5rk pod)",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 12, desired generation is 13.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation"
openshift-authentication | replicaset-controller | oauth-openshift-9958d4595 | SuccessfulDelete | Deleted pod: oauth-openshift-9958d4595-s94mv
openshift-authentication | deployment-controller | oauth-openshift | ScalingReplicaSet | Scaled down replica set oauth-openshift-9958d4595 to 0 from 1
openshift-authentication | replicaset-controller | oauth-openshift-66d787f86d | SuccessfulCreate | Created pod: oauth-openshift-66d787f86d-ln9xx
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 6, desired generation is 7."
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-18 -n openshift-kube-apiserver because it was missing
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-66d787f86d to 1 from 0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-18 -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
replicaset-controller |
metrics-server-7f98b5f8b5 |
SuccessfulDelete |
Deleted pod: metrics-server-7f98b5f8b5-p26dm | |
openshift-monitoring |
deployment-controller |
metrics-server |
ScalingReplicaSet |
Scaled down replica set metrics-server-7f98b5f8b5 to 0 from 1 | |
openshift-monitoring |
kubelet |
metrics-server-7f98b5f8b5-p26dm |
Killing |
Stopping container metrics-server | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-18 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 6, desired generation is 7." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" | |
| (x8) | openshift-authentication-operator |
cluster-authentication-operator-oauthserver-workloadworkloadcontroller |
authentication-operator |
DeploymentUpdated |
Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-18 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-18 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/encryption-config-18 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
kubelet |
oauth-openshift-66d787f86d-ln9xx |
Started |
Started container oauth-openshift | |
openshift-authentication |
multus |
oauth-openshift-66d787f86d-ln9xx |
AddedInterface |
Add eth0 [10.129.0.104/23] from ovn-kubernetes | |
openshift-authentication |
kubelet |
oauth-openshift-66d787f86d-ln9xx |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" already present on machine | |
openshift-authentication |
kubelet |
oauth-openshift-66d787f86d-ln9xx |
Created |
Created container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-74c78cc4c7-t6vbt |
Killing |
Stopping container oauth-openshift | |
openshift-authentication |
replicaset-controller |
oauth-openshift-66d787f86d |
SuccessfulCreate |
Created pod: oauth-openshift-66d787f86d-n8zks | |
openshift-authentication |
replicaset-controller |
oauth-openshift-74c78cc4c7 |
SuccessfulDelete |
Deleted pod: oauth-openshift-74c78cc4c7-t6vbt | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-74c78cc4c7 to 1 from 2 | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-66d787f86d to 2 from 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-18 -n openshift-kube-apiserver because it was missing | |
| (x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-18 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-18 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 18 triggered by "required configmap/config has changed" | |
| (x10) | openshift-oauth-apiserver |
kubelet |
apiserver-6d668d4fc7-pn5rk |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [+]etcd-readiness ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/openshift.io-StartUserInformer ok [+]poststarthook/openshift.io-StartOAuthInformer ok [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok [-]shutdown failed: reason withheld readyz check failed |
| (x10) | openshift-oauth-apiserver |
kubelet |
apiserver-6d668d4fc7-pn5rk |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-oauth-apiserver |
multus |
apiserver-6d668d4fc7-hfkd9 |
AddedInterface |
Add eth0 [10.128.0.107/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-6d668d4fc7-hfkd9 |
Started |
Started container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-6d668d4fc7-hfkd9 |
Created |
Created container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-6d668d4fc7-hfkd9 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-6d668d4fc7-pn5rk pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6d668d4fc7-hfkd9 pod)" | |
openshift-oauth-apiserver |
kubelet |
apiserver-6d668d4fc7-hfkd9 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4b5eb36759d58a301131bbcac37a0fcb2796226636668c66462fed20710d4a1c" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-6d668d4fc7-hfkd9 |
Started |
Started container oauth-apiserver | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-oauth-apiserver |
kubelet |
apiserver-6d668d4fc7-hfkd9 |
Created |
Created container oauth-apiserver | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
multus |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.105/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
CreatedSCCRanges |
created SCC ranges for openshift-must-gather-b8249 namespace | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-kube-apiserver |
multus |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
AddedInterface |
Add eth0 [10.130.0.113/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-1 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Created |
Created container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-1 |
Started |
Started container pruner | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-6d668d4fc7-hfkd9 pod)" to "All is well" | |
openshift-authentication |
kubelet |
oauth-openshift-66d787f86d-n8zks |
Created |
Created container oauth-openshift | |
openshift-authentication |
multus |
oauth-openshift-66d787f86d-n8zks |
AddedInterface |
Add eth0 [10.130.0.114/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-prunecontroller |
kube-apiserver-operator |
PodCreated |
Created Pod/revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-2 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
kubelet |
oauth-openshift-66d787f86d-n8zks |
Started |
Started container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-66d787f86d-n8zks |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:770cd3540e204f7a7bea6cd19d18e7f28e0891e8b7bc5eb4275e20ab45822564" already present on machine | |
openshift-authentication |
replicaset-controller |
oauth-openshift-74c78cc4c7 |
SuccessfulDelete |
Deleted pod: oauth-openshift-74c78cc4c7-nk55v | |
openshift-authentication |
kubelet |
oauth-openshift-74c78cc4c7-nk55v |
Killing |
Stopping container oauth-openshift | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Started |
Started container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Created |
Created container pruner | |
openshift-kube-apiserver |
kubelet |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-authentication |
replicaset-controller |
oauth-openshift-66d787f86d |
SuccessfulCreate |
Created pod: oauth-openshift-66d787f86d-9frbd | |
openshift-kube-apiserver |
multus |
revision-pruner-18-ci-op-2fcpj5j6-f6035-2lklf-master-2 |
AddedInterface |
Add eth0 [10.128.0.108/23] from ovn-kubernetes | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-66d787f86d to 3 from 2 | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-74c78cc4c7 to 0 from 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "ci-op-2fcpj5j6-f6035-2lklf-master-0" from revision 17 to 18 because node ci-op-2fcpj5j6-f6035-2lklf-master-0 with revision 17 is the oldest | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 17; 0 nodes have achieved new revision 18"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 17" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 17; 0 nodes have achieved new revision 18" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver |
kubelet |
installer-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Started |
Started container installer | |
openshift-kube-apiserver |
multus |
installer-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
AddedInterface |
Add eth0 [10.129.0.108/23] from ovn-kubernetes | |
openshift-kube-apiserver |
kubelet |
installer-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-18-ci-op-2fcpj5j6-f6035-2lklf-master-0 |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:adc55330a214af3a57e2fac059ed409c2a7b1dc54d829a3ad1f719ce6c15ffa0" already present on machine | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Created |
Created container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-marketplace |
multus |
redhat-marketplace-zpgfj |
AddedInterface |
Add eth0 [10.128.0.110/23] from ovn-kubernetes | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Started |
Started container extract-utilities | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Pulling |
Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" | |
| (x5) | openshift-monitoring |
kubelet |
metrics-server-7f98b5f8b5-9v6xq |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]metric-storage-ready ok [+]metric-informer-sync ok [+]metadata-informer-sync ok [-]shutdown failed: reason withheld readyz check failed |
| (x5) | openshift-monitoring |
kubelet |
metrics-server-7f98b5f8b5-9v6xq |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Pulled |
Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.18" in 8.847s (8.847s including waiting). Image size: 967040755 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Created |
Created container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Started |
Started container extract-content | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Pulling |
Pulling image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Pulled |
Successfully pulled image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:e65b66e972497e28a2213c43a98c67236de6efaf433d6b2f0843d425efdc86d8" in 1.371s (1.371s including waiting). Image size: 896974229 bytes. | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Created |
Created container registry-server | |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Started |
Started container registry-server | |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-7f98b5f8b5-p26dm |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]metric-storage-ready ok [+]metric-informer-sync ok [+]metadata-informer-sync ok [-]shutdown failed: reason withheld readyz check failed |
| (x4) | openshift-monitoring |
kubelet |
metrics-server-7f98b5f8b5-p26dm |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-marketplace |
kubelet |
redhat-marketplace-zpgfj |
Killing |
Stopping container registry-server | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-28829685-ss2qp |
AddedInterface |
Add eth0 [10.131.0.24/23] from ovn-kubernetes | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-28829685 | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829685-ss2qp |
Pulled |
Container image "registry.build02.ci.openshift.org/ci-op-2fcpj5j6/stable@sha256:4aa67ce7113d1b4dd76198d58084bdccc40df3b8f8f23546d31fe0f6f6a14a69" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829685-ss2qp |
Created |
Created container collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-28829685-ss2qp |
Started |
Started container collect-profiles | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-28829685 |
SuccessfulCreate |
Created pod: collect-profiles-28829685-ss2qp | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulDelete |
Deleted job collect-profiles-28829640 | |
| (x2) | openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-28829685, condition: Complete |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-28829685 |
Completed |
Job completed |